Programming: This forum is for all programming questions. The question does not have to be directly related to Linux, and any language is fair game.
06-22-2010, 09:27 AM | #1
LQ Newbie
Registered: May 2009
Location: Kolkata, India
Distribution: Fedora
Posts: 6
Can anyone please explain this output?
Code:
#include <cstdio>
#include <cmath>

int main() {
    printf("%lf\n", log(8) / log(2));

    // case 1: store the result in a double variable first
    double v = log(8) / log(2);
    printf("%lld\n", (long long)v);

    // case 2: cast the expression directly to long long
    printf("%lld\n", (long long)(log(8) / log(2)));

    // case 3: cast to double first, then to long long
    printf("%lld\n", (long long)(double)(log(8) / log(2)));

    // case 4: cast to float first, then to long long
    printf("%lld\n", (long long)(float)(log(8) / log(2)));
}
output:
Code:
3.000000
3
2
2
3
Why does it not work properly when typecasting the double to long long,
but work fine when we downcast the double to float and then cast the float to long long?
06-22-2010, 09:39 AM | #2
LQ Guru
Registered: Dec 2007
Distribution: CentOS
Posts: 5,286
The 32-bit x86 architecture uses 80-bit floating point (because it is most efficient on that hardware) in some places where you might expect fewer bits.
The 80-bit value of log(8)/log(2) is probably a tiny amount less than 3, but when it is stored as a 64-bit or 32-bit value it gets rounded to exactly 3.
I expect you compiled with less than maximum optimization, because the variable v seems to really exist as a 64-bit variable. More optimization could have eliminated v and used the 80-bit value instead.
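One way to peek at the 80-bit value itself is to compute it in long double, which on x86 is the x87's native format. A minimal sketch (assuming glibc's logl; whether the result is exactly 3 or a hair below depends on your libm):
Code:
#include <cstdio>
#include <cmath>

int main() {
    // logl computes in long double, which on x86 is the x87's 80-bit format
    long double x = logl(8.0L) / logl(2.0L);
    printf("%.25Lg\n", x);  // may show something just under 3 rather than exactly 3
}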
The coded cast to double makes no difference, because the value of log(8)/log(2) was already supposed to be a double; the fact that this double lives in an 80-bit register is a quirk of the architecture.
The conversion directly from 80-bit floating point to a 64-bit integer preserves the fact that the value is less than 3.
The cast to a 32-bit float is a true conversion, so the rounding to exactly 3 occurs.
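You can watch the conversions happen by printing more digits than %lf's default six decimals. A minimal sketch (assuming a 32-bit x86 build that uses the x87 FPU, i.e. without -mfpmath=sse; the exact digits may differ on your system):
Code:
#include <cstdio>
#include <cmath>

int main() {
    double v = log(8) / log(2);            // round trip through a 64-bit double
    printf("stored double: %.20g\n", v);   // likely prints exactly 3
    printf("float:         %.20g\n", (double)(float)(log(8) / log(2)));
    // the raw expression may still be in an 80-bit register here,
    // so truncation can yield 2 instead of 3
    printf("truncated:     %lld\n", (long long)(log(8) / log(2)));
}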
I think there are some compile-time options that will generate slower code covering most of the cases where 64-bit doubles would produce a different result than 80-bit values (they force extra rounding of 80-bit values to 64 bits).
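On gcc, for instance, -ffloat-store forces floating-point values out of registers and rounds them to their declared width, and -msse2 -mfpmath=sse does the arithmetic in SSE registers at true 64-bit double precision instead of 80 bits. A sketch of how you might compare the behaviors (assuming g++ on 32-bit x86; the file name is just an example):
Code:
g++ -O2 test.cpp -o test-x87                     # 32-bit x86 default: 80-bit x87 arithmetic
g++ -O2 -ffloat-store test.cpp -o test-store     # stores round intermediate values to 64 bits
g++ -O2 -msse2 -mfpmath=sse test.cpp -o test-sse # IEEE 64-bit doubles throughout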
But this type of problem is not limited to the 80-bit quirk of the 32-bit x86 architecture. Similar problems are possible even on architectures with perfect IEEE 64-bit floating point. You should not rely on floating-point computations on non-integer values rounding back to exactly the correct integer. They might do so, but they might not.
log(8) cannot be represented exactly in floating point with any number of bits, nor can log(2). The approximation of log(8) has a good chance of not being exactly equal to 3 times the approximation of log(2). In such cases, the division should have about one chance in four of producing a result less than the correct integer. In simple cases like this one, rounding to a floating-point representation with at least 2 fewer bits (80 to 64, or either 80 or 64 to 32) will get you the exact correct integer result. But even that rule won't hold for slightly more complicated cases.
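This is why, when a floating-point result is known to be an integer, the usual defensive idiom is to round to nearest rather than truncate; a minimal sketch using the standard llround from <cmath>:
Code:
#include <cstdio>
#include <cmath>

int main() {
    // llround rounds to the nearest integer, so a result like
    // 2.9999999999999996 still becomes 3, where a plain
    // (long long) cast would truncate it to 2.
    printf("%lld\n", llround(log(8) / log(2)));
}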
I think the difference between your cases 1 and 2 occurs only because of the 80-bit oddity of the x86 architecture, but the difference between cases 2 and 4 can be a fundamental characteristic of floating point.
So with this or some other division that algebraically equals exactly 3, done on some architecture (or with some command-line switches) that produces IEEE-perfect results, you would find that cases 2 and 3 are still what you consider "not working", but that case 1 joins them and is also "not working".
Last edited by johnsfine; 06-22-2010 at 09:57 AM.
2 members found this post helpful.
06-22-2010, 04:48 PM | #3
Senior Member
Registered: Dec 2005
Location: Campinas/SP - Brazil
Distribution: SuSE, RHEL, Fedora, Ubuntu
Posts: 1,508
Wow!
06-22-2010, 05:52 PM | #4
LQ Veteran
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,286
For a non-architecture-specific analysis, go get a beverage and have a read of this ...
06-23-2010, 02:16 AM | #5
Senior Member
Registered: Oct 2005
Distribution: Gentoo, Slackware, LFS
Posts: 2,248
@johnsfine No doubt about it now: you really are a genius!
@syg00 That's quite a rare and helpful document. Adding it to my collection along with MAF :-).