Quote:
Originally Posted by Sergei Steshenko
C++ by itself does not imply overhead.
|
My experience indicates otherwise. In expert hands, the extra cost is just the dependency on (a specific version of) libstdc++. (Note that all programs, including C++ ones, depend on libc on Linux, either at link time or at run time. C++ binaries are also larger, especially since every compiler version tends to produce a new, backwards-incompatible libstdc++.)
Before this thread, I believed it would be difficult to avoid using virtual methods and exceptions (which do have overhead); I thought the standard C++ library itself uses these features internally. I guess I was wrong, and the only unavoidable overhead is the standard C++ library dependency.
I'm not sure at which point g++ links in the standard C++ library, though. Both gcc -std=c99 and g++ -std=c++03 seem to produce the same binary when fed a C source file.
Quote:
Originally Posted by Sergei Steshenko
With great (though not absolute) precision one can say that C++ is simply a superset of "C".
|
Syntax-wise, yes; but as languages, no. They are governed by different standards. Although almost all C code compiles as C++, it does not necessarily have the same semantics. Many semantics are the same, but some important ones differ.
Consider the Kahan summation algorithm:
Code:
double sum;   /* Accumulator */
double err;   /* Error (compensation) accumulator */
double input; /* Value to add */
double tmp1, tmp2;

tmp1 = input - err;         /* Apply the error carried over from the previous step */
tmp2 = sum + tmp1;          /* Add; low-order bits of tmp1 may be lost here */
err  = (tmp2 - sum) - tmp1; /* Recover the bits that were lost */
sum  = tmp2;
Traditionally, this code had to be compiled without optimization, or with specific compiler settings; otherwise the compiler would optimize the compensation away, yielding incorrect results.
In C99, the conversion rules state that a cast must discard any extra range and precision: optimization may not lead to a case where a cast expression is evaluated at a higher precision than its type, even implicitly. In other words, given
Code:
tmp1 = (double)( input - err );
tmp2 = (double)( sum + tmp1 );
err = (double)( (double)( tmp2 - sum ) - tmp1 );
sum = tmp2;
a C99 compiler will happily optimize and still always produce correct code. As far as I can tell (please correct me if I'm wrong), the C++ standard only defines the result when the cast type differs from the operand type: a C++ compiler may optimize the err term away completely, yielding different results.
I think g++ applies the same conversion rules for C++ too. I believe the Intel C++ Compiler does not, but I'm too lazy to check (I do not have it installed at home right now), so I might well be wrong.
In practice, I think most compilers nowadays apply optimization strategies that do not affect the result. I do not see anything in the standards that would require this, but it is sensible in practice: otherwise a lot of computational code would have to be compiled without optimizations to get correct numerical results.
Quote:
Originally Posted by Sergei Steshenko
If you think "C" is faster
|
No. I am saying C++ is not faster.
Quote:
Originally Posted by Sergei Steshenko
Again, I do not like C++, I find it convoluted, tangled and illogical. But I'm trying to be objective.
|
I really do appreciate the discussion.