When I run this piece of code in Python (though the same thing should happen in any programming language), I get the following results:
Code:
In [1]: for n in range(1, 11):
   ...:     n/10.0 - n*0.1
   ...:
Out[1]: 0.0
Out[1]: 0.0
Out[1]: -5.5511151231257827e-17
Out[1]: 0.0
Out[1]: 0.0
Out[1]: -1.1102230246251565e-16
Out[1]: -1.1102230246251565e-16
Out[1]: 0.0
Out[1]: 0.0
Out[1]: 0.0
As you can see, for n in [3, 6, 7] there is a small difference between n*0.1 and n/10.0. I understand that this is caused by the inexactness of converting from decimal to binary, but why do I get these results only for n = 3, 6, 7? I tried a small "trick", but it didn't work:
Code:
In [4]: x = y = 0.0
In [5]: for n in range(10000):
   ...:     x = x + 3/10.0
   ...:     y = y + 3*0.1
   ...:
In [8]: x
Out[8]: 3000.0000000003583
In [9]: y
Out[9]: 3000.0000000003583
Do you have other ideas how to show the difference between these two numbers? Thanks in advance for your ideas
Quote:
As you can see, for n from [3,6,7], there is a small difference between n*0.1 and n/10.0. Well, I understand that this is caused by uncertainties of transforming from decimal to binary code,
I wouldn't call it uncertainties. There is no way to exactly represent 0.1 in binary. Even if every implementation used the exact same approximate value for 0.1, it still wouldn't exactly equal 0.1.
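A quick way to see this directly (a Python sketch, since the original example is Python): the standard decimal module can print the exact value a float actually stores, so you can look at what 0.1 became after being rounded to a binary double.

```python
from decimal import Decimal

# Decimal(float) converts the float's exact binary value to decimal,
# so it shows precisely what each expression stores as a double.
print(Decimal(0.1))      # 0.1000000000000000055511151231257827...
print(Decimal(3 / 10.0)) # starts 0.29999999999999998889...
print(Decimal(3 * 0.1))  # starts 0.30000000000000004440...
```

Note that the two routes to "0.3" really do store different doubles; the decimal expansions differ starting around the 17th digit.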
Quote:
but why only for n=[3,6,7]
Why not only 3, 6 and 7? You should expect some cases to round the same on both sides and some not.
Consider 1/10, 2/10, 4/10, and 8/10. If you understand binary floating point, you should expect those to all round consistently compared to each other. If 1/10 is the same approximation as 0.1, you can be very sure 2/10 must be the same approximation as 2*0.1.
It is also not surprising that 1/10 is the same approximation as 0.1.
We also know (from the nature of floating point) that if 3/10 is not the same approximation as 3*0.1, then 6/10 cannot be the same as 6*0.1.
We know that 5/10 must be exactly 0.5. Maybe it is possible that 5 times the best approximation of 0.1 would not round back to exactly 0.5, but apparently it does. Again we should be certain that 10*0.1 rounds in the same way as 5*0.1.
So the only independent cases are that 3 and 7 round wrong, while 9 and 11 round right.
I expect you could explain those precisely as well by looking at the expansion of 1/10 as a binary repeating number. But I don't have the patience at the moment.
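That grouping is easy to verify empirically; a one-line Python check (assuming ordinary IEEE 754 doubles) confirms that exactly n = 3, 6, 7 disagree in the range 1..10:

```python
# which n in 1..10 give a different double for n/10.0 vs n*0.1?
mismatches = [n for n in range(1, 11) if n / 10.0 != n * 0.1]
print(mismatches)  # [3, 6, 7]
```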
Quote:
I tried a small "trick", but it didn't work:
Quote:
Do you have other ideas how to show the difference between these two numbers?
I can't tell what you're trying to accomplish with that second program, so I can't suggest a correction.
Here's a more concrete example; try running these two C programs:
Your first program has errors. Your second program just demonstrates the behavior the OP asked about.
Quote:
n=i/10.0-i*0.1;
The mantissa size of n is unimportant to the question. What is important here is the data type of 10.0 and 0.1.
I don't happen to know how you make 0.1 a correct long double. It is obvious how to cast 10.0 as a correct long double.
With 10.0 and 0.1 as long doubles, the value of i/10.0 - i*0.1 must still fit exactly in a float, because it can't have many significant bits; it should have either zero or one significant bit.
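The same "zero or one significant bit" property is easy to check for the plain double case (a Python sketch; this checks ordinary 64-bit doubles, not the long double mix discussed above): every nonzero difference in the OP's output is a single power of two.

```python
# every nonzero difference n/10.0 - n*0.1 is a single power of two
# (one significant bit): -2**-54 for n=3, -2**-53 for n=6 and 7.
for n in range(1, 11):
    d = n / 10.0 - n * 0.1
    print(n, d)
```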
Quote:
printf("%.40e\n", n);
I expect the garbage you got was because %e doesn't understand a long double.
What did you intend to demonstrate with the long doubles?
I haven't tried the following yet myself (so stupid typos are possible) but if you want to see how the original effect looks with a different number of mantissa bits, try:
Code:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    float n;
    long double ten = 10.0;
    long double tenth = 1 / ten;
    unsigned int i;

    for (i = 1; i <= 11; i++)
    {
        n = i/ten - i*tenth;
        printf("%.40e\n", n);
    }
    return 0;
}
Well, we're doing numerical analysis at university, and our teacher asked us to explain why, for n = 3, 6, 7, we get that n/10 is different from n*0.1. I can't yet force myself to think about numbers in binary, so it's hard for me to understand this.
About that small trick I tried: I thought that if there is a difference between 3/10 and 3*0.1, then if I multiplied them by a big number (well, I added them instead, for "cleanliness of experiment"), this difference would somehow show up. If 3/10 can be written as 3*0.1 + x, I thought that after a big number of iterations this x would show itself, and 3/10 added to itself 10000 times would be different from 3*0.1 added to itself 10000 times. But this didn't happen; they were both the same.
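The trick fails for an interesting reason that a short Python sketch can show: the two addends really are different doubles, but once the running sum grows large, a difference of 2**-54 in the addend falls below the rounding granularity of the sum, so each addition rounds both loops to the same total. A summation that rounds only once (math.fsum) does expose the difference.

```python
import math

a = 3 / 10.0   # 0.3, the double nearest to the real 3/10
b = 3 * 0.1    # 0.30000000000000004, one last bit higher
print(a != b)  # True: they are different doubles

# naive accumulation: the tiny difference in the addend is absorbed
# by the rounding of each addition once the sum is large
x = y = 0.0
for _ in range(10000):
    x += a
    y += b
print(x == y)  # True, matching the OP's result

# fsum computes the exact sum and rounds only once, so the
# accumulated difference (10000 * 2**-54) survives
print(math.fsum([a] * 10000) == math.fsum([b] * 10000))  # False
```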
Thank you very much for your answers, you were very helpful.
Quote:
our teacher asked us to explain why for n=[3,6,7] we get that n/10 is different from n*0.1
Then you should look at it to that level of detail.
Quote:
I yet can't force myself to think about numbers as binary
But you need to, in order to do what your instructor asked.
The first rule in floating point is that powers of two live in the exponent, so 1/10 has the same mantissa as 1/5.
For the mantissa, it might help to consider the following:
If X = 1/5 then X = (3 + X)/16
from which you should be able to understand that in hex 1/5 is .3333...
From which you should understand that in binary 1/5 is
.00110011001100110011...
So in floating point 1/10 or 1/5 or 4/10 or 8/10 are each some power of two times 001.10011001100110011...
But for this kind of question you need to focus on the rightmost end, so you might think 1/5 is 001.10011...00110011
but that would only be true if the number of mantissa bits mod 4 is 1.
If mantissa bits mod 4 is 0: 1/5 is 001.10011...0011001
If mantissa bits mod 4 is 3: 1/5 is 001.10011...001100
If mantissa bits mod 4 is 2: 1/5 is 001.10011...00110
(note IEEE floating point doesn't include the leading 1. in the stored mantissa bits. It is just implied.)
In binary, multiplying 0011 by 3 gives you 1001. So if we multiply those four versions of 1/5 by three we get:
100.11001...10011001
100.11001...1001011
100.11001...100100
100.11001...10010
Notice how the last part has changed as a result of where it was cut off.
But floating point normalizes after multiplying, before storing the result, giving:
1.0011001...100110
1.0011001...10011
1.0011001...1001
1.0011001...101
Notice the rounding as two bits get lost on the right.
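The repeating-0011 expansion is easy to generate mechanically. Here is a small Python sketch using exact fractions (binary_digits is a hypothetical helper, not anything from the thread):

```python
from fractions import Fraction

def binary_digits(x, n):
    """First n binary digits of x (0 <= x < 1) after the point."""
    out = []
    for _ in range(n):
        x *= 2            # shift one binary place left
        bit = x >= 1      # the digit that just crossed the point
        out.append("1" if bit else "0")
        if bit:
            x -= 1
    return "".join(out)

print(binary_digits(Fraction(1, 5), 16))   # 0011001100110011
print(binary_digits(Fraction(1, 10), 16))  # 0001100110011001
```

As claimed, 1/10 is the same digit pattern as 1/5, just shifted by a power of two.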
Edit: I wrote the above expecting to reach the point that the effect of the double rounding (0.1 is rounded, then 3*0.1 is rounded again) relative to the single rounding of 3/10 would be obvious. But it doesn't work out that way. The double rounding gets you to the same place as the single rounding assuming each operation gives you the closest answer based on its input.
So the cause is not exactly the one I expected. Looking at it from the opposite direction:
5.55...e-17 is two to the -54 power.
3/10 is two to the -2 power times 1.0011001...
64 bit floating point stores 52 bits to the right of the binary point. So when H_TEXMEX_H tried to use 32 bit float the actual computation occurred in 64 bit and
3/10 was less than 3*0.1 in the last bit.
Probably 3*0.1 was, as I computed above, 1.0011001...10011,
while 3/10 rounded incorrectly to 1.0011001...10010.
But possibly 3/10 was correct and 3*0.1 somehow rounded all the way up to 1.0011001...10100.
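For ordinary 53-bit doubles (not the float/long-double mix in the C program), Python's float.hex can settle which way each value actually rounded:

```python
# exact bit patterns of the two values, as hexadecimal floats
print((3 / 10.0).hex())  # 0x1.3333333333333p-2
print((3 * 0.1).hex())   # 0x1.3333333333334p-2, one last bit higher
```

Since IEEE division is correctly rounded, 3/10.0 is the nearest double to the real 3/10; it is the 3*0.1 path, which carries the rounding of 0.1 into the product, that lands one last-bit step above it.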
Oops, my bad, you're right: for printing a long double you need something like %Lf or %Le (checking my C notes here). You're also right about the calculation, which uses doubles by default. Here's a revised version of the second program:
Code:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    long double n;
    unsigned int i;

    for (i = 1; i <= 11; i++)
    {
        n = i/10.0L - i*0.1L;
        printf("%Le\n", n);
    }
    return 0;
}
Still a rounding error, it seems. In my C class they said to use doubles over floats because floats use fewer bits, so there's a higher chance of a rounding error, like those you described. However, it seems that even with long double there is still a rounding error.
Quote:
Still a rounding error it seems. In my C class they said to use doubles over floats because floats use less bits and there's a higher chance of a rounding error
There should be the same chance of a rounding error. I think it is effectively accidental that there were 3 values out of 11 with rounding errors using 64 bit floating point, but only 1 error out of 11 with 80 bit floating point.
The difference in 32 bit vs. 64 bit or 64 bit vs. 80 bit is the size of the rounding error.
The 3 out of 11 rounding errors you saw with 64 bit floating point were each last-bit errors, in the 52nd mantissa bit, which is less than 2.22e-16 times the correct value.
With 80 bit floating point, your error in the value of .9 is 2 to the -64 power. Since .9 would be stored normalized as 1.8*2**(-1), the error was in the 63rd mantissa bit instead of the 52nd.
So by using 80 bit floats instead of 64 bit, you made those rounding errors that do occur 2048 times smaller than they would have been.
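The 2048 factor falls straight out of the bit positions: a last-bit error in a 64-bit double is 2**-52 of the value, versus 2**-63 for an 80-bit x87 long double, and 2**-52 / 2**-63 = 2**11. A quick Python check:

```python
ulp64 = 2.0 ** -52  # relative size of the last mantissa bit, IEEE double
ulp80 = 2.0 ** -63  # same for an 80-bit x87 extended double
print(ulp64)          # ~2.22e-16, matching the figure above
print(ulp64 / ulp80)  # 2048.0
```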
Notice that x86 architecture not using SSE floating point uses 80 bit for long double. By your results, I know you are using 80 bits. Other architectures (including x86_64) do not use 80 bits for long double. Notice also that sizeof(long double)*8 does not give you the number of bits used. It gives you at least the number of bits used for a long double, but maybe more.
Hmm, so then would there be any solution to this rounding issue? I suspect not; you can probably just increase the number of bits used until the rounding errors do not affect the number you see or use, right?
Quote:
Hmm, so then would there be any solution to this rounding issue? I suspect not, you can probably just increase the number of bits used until the rounding errors do not affect the number you see or use, right?
For most mortals, getting 1E-16 instead of ZERO does not qualify as a problem........
Quote:
For most mortals, getting 1E-16 instead of ZERO does not qualify as a problem........
Well, yes, of course, but you know, what if it had to be accurate for some scientific purpose? I'm not sure if that's what the OP was complaining about, though.
Quote:
For most mortals, getting 1E-16 instead of ZERO does not qualify as a problem........
Quote:
Originally Posted by dracuss
university numerical analysis. And our teacher asked us to explain why for n=[3,6,7] we get that n/10 is different from n*0.1.
As someone who routinely must find and correct mistakes made by other professional programmers, I feel qualified to claim that it frequently matters that some tiny value, such as 1E-16 differs from zero.
More often, just the fact that it is different matters. How little it is different is not as important.
Consider this extreme example:
Code:
#include <stdio.h>

void test(float x)
{
    float y = x / 3.0;
    printf("The presence of this function call may change the result\n");
    if (y != x / 3.0)
    {
        printf("Most people won't expect this result\n");
    }
}
I've fixed a lot of bugs caused by using == or != with floating point values. If you use == or !=, it doesn't matter how tiny the difference might be. It only matters whether there is a difference.
Many of those differences come from using two different paths to what with real numbers must be the same value, such as the OP's example of x*.1 vs. x/10. But you can see the effects of rounding differences even on two values computed exactly the same way.
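The classic illustration of why == is dangerous with floats, as a Python sketch (math.isclose is one common remedy, comparing within a tolerance instead of exactly):

```python
import math

# two routes to "0.3" that round differently
print(0.1 + 0.2 == 0.3)               # False: 0.1 + 0.2 is 0.30000000000000004
print(math.isclose(0.1 + 0.2, 0.3))   # True: equal within a relative tolerance
```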
In examples similar to the one I gave above, internal details of the code generation by the compiler may determine whether you see the "unexpected" result. Because of the way it uses 80 bit floating point in places the programmer might expect 32 or 64 bit, the Win32 C and C++ environment has a stranger set of these behaviors than other environments (more often having a difference where other environments wouldn't, but also not having a difference where other environments would). In Linux, x86_64 will have fewer of these differences than x86. But even x86_64 will have cases where the exact same computation will compare unequal.
I don't know what level of understanding of these issues Dracuss's numerical analysis teacher is trying to instill. As I said above, most programmers make mistakes from simply failing to remember that there will be differences between multiple computations of what would be the same value if computations were done with real numbers.
A numerical analysis class should go much deeper to discuss the size of the initial differences and how those difference may grow as a result of further computations.
Quote:
Originally Posted by pixellany
For most mortals, getting 1E-16 instead of ZERO does not qualify as a problem........
Rarely, but sometimes, I work with simulations in which a difference between 1E-16 and zero in a final result is significant. Every day, I work with simulations in which a difference between 1E-16 and zero in an intermediate value would result in a very significant change in the final result.
You are right that in most places where floating point numbers are used, the difference between 1E-16 and zero doesn't matter. But that is no excuse for training programmers to fail to understand why the difference is there.
Last edited by johnsfine; 10-14-2009 at 02:41 PM.
Reason: typo