LinuxQuestions.org
Old 05-22-2012, 06:09 PM   #16
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197

Quote:
Originally Posted by alaios View Post
Why we need C
Mainly because many (I think misguided) programmers prefer it.

Quote:
1. Is it speed? How slower c++ or c# are?
C# is slower.

In the hands of an expert, C++ matches or occasionally beats C in execution speed.
In the hands of a beginner, there are lots of ways C++ may yield code that is far slower than what the same beginner would have written in C. If you care about a program's execution speed, don't have a beginner write it.

Quote:
2. Is it interoperability. Is c++ or java less portable between different os?
Portability is a many-faceted question. In most respects java is more portable than C. Interoperability is a related but different complicated question. In many ways C is more interoperable than java.

Without extern "C" there are a lot of interoperability issues with C++ that are fundamentally worse than in C. But C++ includes extern "C", which gives specific functions exactly the same interoperability as C functions.

That feature should not be used only for interoperation with C functions. There are often good reasons to use an extern "C" declaration for a C++ function that will be called only by other C++ functions (the most common case is entry points of a .so file that must work even when the .so and the main executable are compiled by C++ compilers with differing name-mangling rules).
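As a sketch of that .so entry-point case (the names here are hypothetical, not from the thread): the extern "C" function below gets an unmangled, C-compatible name, so a caller built with a different C++ compiler (or plain C) can still resolve it, while the implementation behind it stays ordinary C++.

```cpp
#include <string>

namespace impl {
    // Ordinary C++ inside the shared object; these names may be mangled
    // however the compiler likes.
    std::string greet(const std::string& who) { return "hello, " + who; }
}

// extern "C" gives the entry point an unmangled name, so it can be found
// with dlsym("plugin_init") regardless of the caller's mangling rules.
// Only C-compatible types cross the boundary.
extern "C" int plugin_init(const char* who) {
    return static_cast<int>(impl::greet(who).size());
}
```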

Quote:
3. What makes it favorable and still used for server-based code.
Mainly project inertia. Programmers joining an open source project coded in C tend to self-select as those having an (I think irrational) preference for C over C++. The decision makers for the future of the project (write new modules in C or C++) have thus been selected to be biased in favor of C.

Those effects are much less strong in the closed source world, but even there new C code is often written due to project inertia.
 
Old 05-22-2012, 06:45 PM   #17
Nominal Animal
Senior Member
 
Registered: Dec 2010
Location: Finland
Distribution: Xubuntu, CentOS, LFS
Posts: 1,723
Blog Entries: 3

Rep: Reputation: 948
Quote:
Originally Posted by johnsfine View Post
In the hands of an expert, C++ is equal or occasionally faster execution speed than C.
Only if the C is written by a C++ programmer, or if the C++ compiler has better optimization support than the C compiler. It is C++ that has the extra overhead, not C. Any code you can write in C++, you can implement in C also. I know you think C++ is superior to or successor to C, but there really is no basis for such belief in reality.

I have never, ever seen a case where expert C++ code was as fast as expert C code solving the same problem. Perhaps you could supply an example?

Edited to add: I do concede that it is easier (perhaps more economical?) to write efficient code in C++ than in C, and that it takes much more effort to become an expert C programmer: there are a lot more pitfalls and bad choices in C than in C++. For many tasks, it makes much more sense to use C++ than C.

Last edited by Nominal Animal; 05-22-2012 at 06:51 PM.
 
Old 05-22-2012, 06:51 PM   #18
ejspeiro
Member
 
Registered: Feb 2011
Distribution: Ubuntu 14.04 LTS (Trusty Tahr)
Posts: 203

Rep: Reputation: 26
Exclamation

Quote:
I have never, ever seen a case where expert C++ code was as fast as expert C code solving the same problem. Perhaps you could supply an example?
Yes please! I am also interested in such an example.
 
Old 05-23-2012, 06:12 AM   #19
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by Nominal Animal View Post
Any code you can write in C++, you can implement in C also.
I meant within the bounds of practicality.

Expert coded C++ is faster than expert coded C most often when templates are used for performance.

With templates, you can write a single version of source code that will be compiled inline each time it is used, but into potentially many different versions. You can easily force the optimizer to see information about specific usage that the optimizer normally can't see or use.

In C you could code each different version separately with different names, but often the details you want to force the optimizer to see are passed through intermediate calling layers (all inlined) so you start to need combinatorially many versions of the intermediate layers.
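A sketch of that template technique, with hypothetical names: the stride below is a template parameter, so the optimizer sees it as a compile-time constant in every instantiation, while the C-style version must treat it as an unknown run-time value (matching its speed would need one hand-written copy per stride).

```cpp
#include <cstddef>

// C++: one source version; each instantiation is compiled with Stride
// known at compile time, so the optimizer can unroll, vectorize, or
// strength-reduce the indexing for that particular stride.
template <std::size_t Stride>
long sum_strided(const long* data, std::size_t n) {
    long total = 0;
    for (std::size_t i = 0; i < n; ++i)
        total += data[i * Stride];
    return total;
}

// C-style: stride is a run-time argument the optimizer must assume can
// be anything; specializing it means writing separate named functions.
long sum_strided_c(const long* data, std::size_t n, std::size_t stride) {
    long total = 0;
    for (std::size_t i = 0; i < n; ++i)
        total += data[i * stride];
    return total;
}
```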

When I started using C++, I disliked its tendency to "waste" code space with this kind of absurd stretch of time/space tradeoff favoring time. Doing a job slightly faster with 50MB of binary code than you could have done it with 1MB just didn't make sense. On the things I work on today, any time the 50MB version is a little faster than the 1MB version, the 50MB version is what my customers would prefer.

In less performance-critical tasks, I look at C++ as just a more convenient form of C. I strongly believe in using the appropriate subset of C++. When you really care about program size, that may be a subset not terribly different from C itself. But even there, I prefer the syntax and naming advantages of C++.

For example, when I programmed in C long ago, even though I always cared a lot about program size, I was working on fairly complex projects. Naming functions was always an annoyance in that environment. I hate super long function names. Frequently you have the same basic operation done by several different functions (different because they are done for or with different structures or base types). I hated needing to decorate those function names with extra info to keep them unique.

The first feature I liked about C++ is that it takes care of that for you. When several different functions all logically should have the same name (and differ only by the count and/or type of their inputs), you just give them the same name.

It helps to work in an IDE that knows how to "go to definition" from a function use. If the multiple definitions of the same name make it hard to navigate source code with your text editor, I think the fault is in the tool (text editor), not the language.
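The naming point can be sketched like this (hypothetical functions): both are logically "length", and the compiler picks the right one from the argument, with no _str / _arr suffixes needed.

```cpp
#include <cstddef>
#include <string>

// Same logical operation, same name; overload resolution selects the
// right function by argument type, so no C-style name decoration.
std::size_t length(const std::string& s) { return s.size(); }

template <typename T, std::size_t N>
std::size_t length(const T (&)[N]) { return N; }
```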
 
Old 05-23-2012, 06:29 AM   #20
Sergei Steshenko
Senior Member
 
Registered: May 2005
Posts: 4,481

Rep: Reputation: 454
Quote:
Originally Posted by Doc CPU View Post
...
at that point, I felt the impulse to argue, because I didn't see yet that you distinguish between a runtime and the standard libraries - because for me, that's the same. But I continued reading first.

No, they are not the same.

The "C" standard defines requirements for the standard library. But to run a "C" program, one does not necessarily need a standard library.

The best way to understand this is to buy a cheap controller and program it at the bare-metal level, without an OS.

In the standard library we have, for example, the 'printf' function, which requires a runtime, because stdout is part of the runtime, as all file IO is.

On the other hand, we have, say, the 'cos' function, which is just a stand-alone pure (stateless) function and as such doesn't require a runtime.

At the end of the eighties I was writing "C" for bare metal, and for diagnostic output I used areas of VGA memory I had predefined. Even the compiled program itself resided in other portions of VGA memory - they were not displayed; VGA has a number of pages.
 
Old 05-23-2012, 09:18 AM   #21
orgcandman
Member
 
Registered: May 2002
Location: new hampshire
Distribution: Fedora, RHEL
Posts: 600

Rep: Reputation: 110
Quote:
Originally Posted by Nominal Animal View Post
Only if the C is written by a C++ programmer, or if the C++ compiler has better optimization support than the C compiler. It is C++ that has the extra overhead, not C. Any code you can write in C++, you can implement in C also. I know you think C++ is superior to or successor to C, but there really is no basis for such belief in reality.

I have never, ever seen a case where expert C++ code was as fast as expert C code solving the same problem. Perhaps you could supply an example?

Edited to add: I do concede that it is easier (perhaps more economical?) to write efficient code in C++ than in C, and that it takes much more effort to become an expert C programmer: there are a lot more pitfalls and bad choices in C than in C++. For many tasks, it makes much more sense to use C++ than C.
I can - look at std::valarray. There is nothing in standard C that completely approaches its functionality. Additionally, tests with gnu c++ 4.4 show that in 9 of 10 various vectorizing cases, the optimization produced is on par with someone hand-hacking the assembly. Again, this is something that C CANNOT PROVIDE WITHIN THE STANDARD.
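For what it's worth, a minimal std::valarray sketch of the kind of code meant here (the axpy name is mine): the element-wise arithmetic is written without an explicit loop, leaving the implementation free to vectorize it.

```cpp
#include <valarray>

// Whole-array arithmetic: a*x + y over every element, no explicit loop.
// Implementations are free to evaluate this with vectorized code.
std::valarray<double> axpy(double a,
                           const std::valarray<double>& x,
                           const std::valarray<double>& y) {
    return a * x + y;
}
```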

The idea that C has less overhead than C++ was true in 1998, when optimizing compilers for the language didn't have the flow-path analysis to know which code could be reduced. It is no longer true, and there are very few reasons to use C over C++, apart from the code-space bloat inherent in templates (unless you make heavy use of code that, under C++, would throw exceptions).

Additionally, anyone claiming that C++ has 'extra magic' going on behind the scenes doesn't spend enough time reading the standards. This is an oft-repeated myth. The only extra overhead that can be associated with C++ is when exceptions are thrown (note: thrown - if they are never thrown, there is no additional overhead, since there is no need to walk the stack and clean up) OR when you make use of dynamic casts or dynamic RTTI (both of which indicate that you probably failed to design your software correctly).

C and C++ both have a static initialization phase (take the time to study your GOT). new is just malloc + initialization code (and what C programmer mallocs memory without initializing it?). delete is free. Heck, there are nothrow versions of new/delete, eliminating most of the exceptions which are thrown (others come from the use of RTTI and certain cast operations).
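The new/malloc point can be made concrete (Point is a hypothetical type): the nothrow form below allocates and constructs, reporting failure with a null pointer instead of an exception.

```cpp
#include <new>

struct Point {
    int x, y;
    Point() : x(0), y(0) {}   // 'new' runs this after allocating
};

// new = allocation + construction; the nothrow variant returns nullptr
// on failure instead of throwing std::bad_alloc.
Point* make_point() {
    return new (std::nothrow) Point;
}
```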

Quote:
Originally Posted by Doc CPU View Post
its own memory management, automatic calling of constructors and destructors, garbage collection
What C++ language are you using? One of the biggest nerd-humor jokes coming out of C++11 is that they added multi-threading and atomicity primitives to the language, but didn't add garbage collection primitives (after C++98, the standards body said that GC was likely to enter the language before multi-threading). "Automatic calling" of constructors and destructors? What do you mean? What memory management do you think exists in the standard? (Actually, C++11 adds the shared_ptr class, which you MIGHT think of as a primitive garbage collector.)

Quote:
Originally Posted by millgates View Post
OK. What is a "C virtual machine"?
The C Virtual Machine is the fictional machine providing all of the requisites of the C language standard, and none of the undefined parts.

As an illustration, I'll point to floating point numbers. These are GUARANTEED to exist on the C virtual machine. They do NOT exist in hardware on, say, most ARM systems (or PPC, or MIPS). YET, you can code C on ARM. This is because the compiler embeds a soft-float implementation (a virtual floating point processor). Additional examples include saturation vs. wrap on ints, as well as integer type sizes. These are all things that the compiler writers MUST provide if the actual machine differs from the C virtual machine.

Quote:
Originally Posted by millgates View Post
THE REST OF THE POST
You didn't understand what I was writing. If you did, you'd see that you are actually supporting every assertion I made, but for one thing you said:

Quote:
Originally Posted by millgates View Post
Of course, each language performs well in what it was designed for and poorly in what it was not designed for ... However, this is not the case in general.
When it comes to writing software there is no general case. When comparing things like "efficiency," "speed," etc, you must use concrete cases. You might say that C/C++ has more cases where a good compiler can output "better" code (code which meets the definitions of 'faster' and 'tighter') - and you may very well be correct. But there exists no general case software, as far as I'm concerned.

Anyway, this is all off-topic. There are many very good reasons why C will continue to need to exist (mainly because there ARE still 256k systems being produced which need code - and we don't write in assembly anymore).

Last edited by orgcandman; 05-24-2012 at 04:43 PM. Reason: Clarity
 
Old 05-23-2012, 09:23 AM   #22
Doc CPU
Senior Member
 
Registered: Jun 2011
Location: Stuttgart, Germany
Distribution: Mint, Debian, Gentoo, Win 2k/XP
Posts: 1,099

Rep: Reputation: 344
Hi there,

Quote:
Originally Posted by Sergei Steshenko View Post
No, they are not the same.
runtime and standard library? Then maybe we should be using other expressions that describe them more clearly.

From my POV, there is the application itself, then the standard lib containing the platform-dependent implementation of functions like printf, and usually some sort of OS or BIOS. So where's your runtime? Is it the OS? I don't think so; it's rather the layer with the implementation of the standard functions - the standard library.

Quote:
Originally Posted by Sergei Steshenko View Post
"C" standard defines requirements to standard library. But to run a "C" program one does not necessarily need a standard library.
Correct.

Quote:
Originally Posted by Sergei Steshenko View Post
The best way to understand this is to buy a cheap controller and to program it at bare metal level, without an OS.
As an occasional µC developer, I'm quite familiar with that scenario. However, even there you might have an implementation of the standard lib, maybe a specialized one where many functions are just stubs returning an error code if the hardware doesn't support them. Or there's a stdout implementation that writes to an alphanumeric two-line LCD. Or just a generic serial interface. You had the example of writing to VGA memory.
About fifteen years ago, I developed for a PC-like platform: the hardware was developed by the company itself, the OS being some kind of "Embedded DOS" written by a former colleague of mine - only it wasn't DOS, it just had the same API (though many function calls were stubs that did nothing, as I described before). That was fine, because we could use the plain Borland C compiler for DOS, though there were a few things we couldn't do on that platform.

Quote:
Originally Posted by Sergei Steshenko View Post
In the standard library we have, for example, 'printf' function which requires runtime because stdout is a part of runtime, as all the file IO is.
From my perspective, the bunch of functions including printf (plus the data they need to work) is the runtime. On typical PC platforms, however, these functions do hardly more than pass their arguments on to the appropriate OS calls.

[X] Doc CPU
 
Old 05-23-2012, 09:37 AM   #23
catkin
LQ 5k Club
 
Registered: Dec 2008
Location: Tamil Nadu, India
Distribution: Debian
Posts: 8,578
Blog Entries: 31

Rep: Reputation: 1208
Quote:
Originally Posted by ejspeiro View Post
Funny note: Look for the definition of C in the Glossary provided in [1]!
I could not find the glossary ... ?
 
Old 05-23-2012, 09:46 AM   #24
Doc CPU
Senior Member
 
Registered: Jun 2011
Location: Stuttgart, Germany
Distribution: Mint, Debian, Gentoo, Win 2k/XP
Posts: 1,099

Rep: Reputation: 344
Hi there,

Quote:
Originally Posted by orgcandman View Post
Additionally, anyone claiming that C++ has 'extra magic' going on behind the scenes doesn't spend enough time reading the standards. This is an oft-repeated myth. The only extra overhead ...
I'm not talking about extra overhead. I'm talking about things that happen magically (yes, I'll stick to that term) without being obvious to the programmer. Like selecting a particular scope and referencing it with the this keyword. Like passing this as a hidden parameter to every method to implement that magic. Like selecting one out of several methods with identical names by the type and number of arguments. To me, that all looks like evil attempts at obfuscation. I'm in for clarity.

Quote:
Originally Posted by orgcandman View Post
What C++ language are you using?
None at all, if I can help it. I was introduced to C++ in the mid-nineties, found it gruesomely mystical, but could cope with it more or less the few times I had to. However, I participated in about a handful of C++ projects until 2000, and haven't used it since - apart from minor changes to one or another working project.

Quote:
Originally Posted by orgcandman View Post
"automatic calling" of constructors and destructors? What do you mean?
The bare fact that for example a simple delete foo; can involve the execution of additional code (the destructor), which is not obvious from looking at the statement.
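A minimal illustration of that hidden call (the Flag type is hypothetical): the single statement delete f; runs ~Flag() before freeing the memory, even though nothing in the statement itself says so.

```cpp
// A type whose destructor does visible work: 'delete f;' executes
// ~Flag() before releasing the memory, with no hint at the call site.
struct Flag {
    bool* destroyed;
    explicit Flag(bool* d) : destroyed(d) {}
    ~Flag() { *destroyed = true; }
};
```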

Quote:
Originally Posted by orgcandman View Post
There are many very good reasons why C will continue to need to exist (mainly because there ARE still 256k systems being produced which need code - and we don't write in assembly anymore).
Oh yes, we do. Not so much on desktop systems, but very frequently in the field of embedded systems with limited hardware.

[X] Doc CPU
 
Old 05-23-2012, 11:14 AM   #25
Sergei Steshenko
Senior Member
 
Registered: May 2005
Posts: 4,481

Rep: Reputation: 454
Quote:
Originally Posted by Doc CPU View Post
...
From my perspective, the bunch of functions including printf (plus the data they need to work) is the runtime. On typical PC platforms, these functions do hardly more than pass the arguments to appropriate OS calls, however.

[X] Doc CPU
As I see it, the 'runtime' is the part which interfaces with the OS. Basically, you are saying the same thing. I was trying to say that the standard library has an OS-independent part, and that IO does not necessarily have to be as the standard describes - there may be other implementations.
 
Old 05-23-2012, 01:10 PM   #26
ejspeiro
Member
 
Registered: Feb 2011
Distribution: Ubuntu 14.04 LTS (Trusty Tahr)
Posts: 203

Rep: Reputation: 26
Quote:
Quote:
Originally Posted by ejspeiro View Post
Funny note: Look for the definition of C in the Glossary provided in [1]!
I could not find the glossary ... ?
Yes, I knew it was not there when I posted the reference; however, the reference stands. It is just a matter of taking a look at the actual book's glossary. Maybe in a Google Books preview? Or at Amazon?
 
Old 05-24-2012, 11:17 AM   #27
dogpatch
Member
 
Registered: Nov 2005
Location: Central America
Distribution: Mepis, Android
Posts: 490
Blog Entries: 4

Rep: Reputation: 238
Quote:
Originally Posted by Doc CPU View Post
My favorite programming model is a mix of C and assembly language.
I agree.

Quote:
Originally Posted by NevemTeve View Post
Off-topic: Well, why is Cobol 'still out there'?
Cobol (COmmon Business-Oriented Language) was intended specifically for business applications such as accounting and auditing. It was never intended to satisfy general programming needs.

Its implementation of integer math for all numeric functions eliminates rounding errors associated with floating point math, and the loss / gain of a few pennies that drive accountants mad.

Its simple but rigorous structure (File Division, Data Division, etc.) and verbose, English-like syntax tend to foster readable, outline-style code and a top-down structure that is much easier to maintain than in other programming languages.

As a personal note, in the late 80s I created my own run-time speed test (under DOS) comparing a C program compiled with the Microsoft Optimizing C compiler and a Cobol program performing the exact same task (math, data moves, etc.) compiled by Realia Cobol (the best Cobol compiler around). The Realia executable was a fraction of the size and almost three times faster than the Microsoft C executable.

Last edited by dogpatch; 05-24-2012 at 11:18 AM. Reason: fix first quote
 
Old 05-24-2012, 12:29 PM   #28
Sergei Steshenko
Senior Member
 
Registered: May 2005
Posts: 4,481

Rep: Reputation: 454
Quote:
Originally Posted by dogpatch View Post
...
Its implementation of integer math for all numeric functions eliminates rounding errors associated with floating point math, and the loss / gain of a few pennies that drive accountants mad.
...

I'm wondering.

Suppose you have a $1 deposit and 1.23% annual yield on some kind of deposit. How are you going to implement in integer realm the $.00123 gain after the first year ? How are you going to implement the 1.23 cent gain in integer realm ?
 
Old 05-24-2012, 01:02 PM   #29
Doc CPU
Senior Member
 
Registered: Jun 2011
Location: Stuttgart, Germany
Distribution: Mint, Debian, Gentoo, Win 2k/XP
Posts: 1,099

Rep: Reputation: 344
Hi there,

Quote:
Originally Posted by Sergei Steshenko View Post
I'm wondering.

Suppose you have a $1 deposit and 1.23% annual yield on some kind of deposit. How are you going to implement in integer realm the $.00123 gain after the first year ? How are you going to implement the 1.23 cent gain in integer realm ?
Suppose you were going to implement such software, and suppose that - for whatever reason - you decided to do all calculations integer-based. As you point out in your example, you can get figures in the realm below cents whenever multiplication or division is involved. (By the way, your figures are wrong: with $1 and 1.23%, you gain $.0123 after a year - you missed by a factor of ten.)

So you must make a trade-off. You have to set a limit on the accuracy. One way to do that is to store all monetary values as, say, thousandths of a cent. Then you can do very precise calculations. Of course, you can always find an example where there might be a noticeable error. That's the trade-off I meant.
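A sketch of that scheme, with hypothetical names: amounts held as integer thousandths of a cent, and the rate scaled to parts per million, so one rounded integer division does all the work.

```cpp
#include <cstdint>

// Money as integer thousandths of a cent: $1.00 == 100000 milli-cents.
using MilliCents = std::int64_t;

// rate_ppm is the rate in parts per million: 1.23% == 12300 ppm.
// One integer multiply, then a single rounded (half-up) division.
MilliCents apply_rate(MilliCents amount, std::int64_t rate_ppm) {
    return (amount * rate_ppm + 500000) / 1000000;
}
```

At this resolution, 1.23% of $1.00 comes out as 1230 milli-cents, i.e. exactly $.0123.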

And it's not better with floating point numbers; they also have limited accuracy. IEEE single-precision floats, for instance, use 23 bits for the mantissa; that's about 6 to 7 valid digits in decimal. The consequence is that if you have amounts in the range of millions, you can't be accurate to a cent any more. Double precision pushes the limit, but the theoretical problem remains.
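The single-precision limit can be checked directly (the helper below is mine): at ten million dollars, one float ulp is already a whole dollar, so adding a cent is silently lost.

```cpp
// Returns true if 'dollars' can still distinguish one added cent.
// Around $10,000,000 a single-precision ulp is 1.0, so the added
// 0.01f rounds away and the sum compares equal to the original.
bool float_keeps_cent(float dollars) {
    float plus = dollars + 0.01f;
    return plus != dollars;
}
```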

[X] Doc CPU
 
Old 05-24-2012, 01:25 PM   #30
Sergei Steshenko
Senior Member
 
Registered: May 2005
Posts: 4,481

Rep: Reputation: 454
Quote:
Originally Posted by Doc CPU View Post
...
So, "implementation of integer math for all numeric functions eliminates rounding errors associated with floating point" is a baseless statement - with a long enough mantissa, floating point numbers are no worse than integers, and may even be better.

On a 32-bit machine, integers are typically 32 bits, while doubles have a 53-bit mantissa (IIRC), so doubles are already better out of the box. And 'gcc' already supports 'long double': http://en.wikipedia.org/wiki/Long_double .
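The 53-bit mantissa claim is easy to verify (the helper below is mine): every integer up to 2^53 survives a round trip through a double, far beyond a 32-bit int's range.

```cpp
#include <cstdint>

// A double's 53-bit mantissa represents every integer up to 2^53
// exactly; a 32-bit int stops at about 2.1e9. Beyond 2^53, nearby
// integers collapse onto the same double.
bool double_exact(std::int64_t n) {
    return static_cast<std::int64_t>(static_cast<double>(n)) == n;
}
```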

And, of course, there is http://gmplib.org/ :

Quote:
GMP is a free library for arbitrary precision arithmetic, operating on signed integers, rational numbers, and floating point numbers. There is no practical limit to the precision except the ones implied by the available memory in the machine GMP runs on. GMP has a rich set of functions, and the functions have a regular interface.
 