LinuxQuestions.org (/questions/)
-   Programming (https://www.linuxquestions.org/questions/programming-9/)
-   -   [C] scanf() reads wrong floats; double goes fine (https://www.linuxquestions.org/questions/programming-9/%5Bc%5D-scanf-read-swrong-floats%3B-double-goes-fine-723793/)

ivanatora 05-05-2009 02:59 AM

[C] scanf() reads wrong floats; double goes fine
 
Hello,
This is a strange problem and I hope someone has an explanation to it.
When I use scanf() to read floating point numbers, sometimes the last digit is wrong.
Code:

#include <stdio.h>
int main(void)
{
    float f=0;
    scanf("%f", &f);
    printf("%f\n", f);
    return 0;
}

Here is an example run of the executable (note that only certain numbers come out wrong, not all):
Code:

Press ENTER or type command to continue
99.1
99.099998
Press ENTER or type command to continue
89.99
89.989998
Press ENTER or type command to continue
12.88
12.880000
Press ENTER or type command to continue
333.333
333.333008
Press ENTER or type command to continue
1313.1313
1313.131348
Press ENTER or type command to continue
99999.1
99999.101562

If I change float to double (and %f to %lf, respectively), every value is read correctly.
I know that in GCC a float is 4 bytes, but how many bytes are for the integer part and how many for the fractional part of the number?
If there is at least 1 byte for the fractional part, that gives a precision of about 0.004, which should be more than enough to store the number "99.1" correctly. Yet apparently even that small number comes out wrong.
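
For reference, here is a minimal sketch (an editorial addition, not part of the original post) that asks the C library itself how much precision a float carries; it uses only standard <float.h> macros:
Code:

#include <stdio.h>
#include <float.h>

int main(void)
{
    /* What the C library reports about float on this platform. */
    printf("sizeof(float) = %zu bytes\n", sizeof(float));
    printf("FLT_MANT_DIG  = %d bits of mantissa\n", FLT_MANT_DIG);
    printf("FLT_DIG       = %d reliable decimal digits\n", FLT_DIG);
    printf("FLT_EPSILON   = %g\n", FLT_EPSILON);
    return 0;
}

On a typical IEEE 754 machine this reports 4 bytes, 24 mantissa bits, and about 6 reliable decimal digits in total, so there is no separate byte budget for the integer and fractional parts.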

Sergei Steshenko 05-05-2009 03:36 AM

Quote:

Originally Posted by ivanatora (Post 3530498)
When I use scanf() to read floating point numbers, sometimes the last digit is wrong. [...] If I change float to double (and %f to %lf, respectively), every value is read correctly.

Don't even start dealing with floating point numbers (both 'float' and 'double') until you learn the basics of computer arithmetic.

Remember that a fraction which is finite (non-periodic) in one radix is not necessarily finite in another. Modern computers do not count in decimal by default; they count in binary.

Start from here: http://en.wikipedia.org/wiki/Floating_point .
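
To make the radix issue concrete, here is a minimal sketch (an editorial illustration, assuming an IEEE 754 platform) that prints the value actually stored when 99.1 is written as a float and as a double; neither is exact, the double is simply much closer:
Code:

#include <stdio.h>

int main(void)
{
    float  f = 99.1f;  /* stored as the nearest representable float  */
    double d = 99.1;   /* stored as the nearest representable double */

    printf("float : %.20f\n", f);  /* typically 99.09999847412109375000 */
    printf("double: %.20f\n", d);  /* typically 99.09999999999999431566 */
    return 0;
}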

dmail 05-05-2009 04:56 AM

There is a simple question I ask people who think this is an error.
Can you accurately represent the value 1/3 in a base 10 system?

ivanatora 05-05-2009 05:45 AM

Quote:

Originally Posted by dmail (Post 3530579)
Can you accurately represent the value 1/3 in a base 10 system?

Actually, no. 0.33 (with the 3 repeating) is close, but still not 1/3.
0.1 in decimal is 0.000110011001100110011... in binary (the group 0011 repeats). So is the precision lost when converting it back to decimal? Maybe it is not exactly periodic and can be resolved with more bits; is that why using double works?
The other numbers that came out wrong also need more bits to represent. That should be the reason for the precision drop when using float: part of the significant bits is lost.
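
As an editorial aside, that repeating pattern can be derived exactly with integer long division, without trusting any floating-point type; a minimal sketch:
Code:

#include <stdio.h>

int main(void)
{
    /* Expand 1/10 in base 2: at each step the next bit is (2*num)/den. */
    unsigned num = 1, den = 10;

    printf("0.");
    for (int i = 0; i < 24; i++) {
        num *= 2;
        printf("%u", num / den);  /* next binary digit of 1/10 */
        num %= den;
    }
    printf("...\n");              /* prints 0.000110011001100110011001... */
    return 0;
}

The 0011 group repeats forever, so any fixed number of mantissa bits (24 for float, 53 for double) has to cut it off somewhere; double just cuts it off much later.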

dmail 05-05-2009 06:09 AM

Quote:

Originally Posted by ivanatora (Post 3530624)
Actually, no. 0.33 (with the 3 repeating) is close, but still not 1/3.
0.1 in decimal is 0.000110011001100110011... in binary (the group 0011 repeats).

0.1 also cannot be represented exactly in base 2. Stored as an IEEE 754 single-precision value it is
[0][01111011][10011001100110011001101]
where the boxes represent the different parts of the number: first the sign bit, next the biased exponent, and last the 23-bit mantissa (which has an implicit leading 1). You can see the mantissa is a repeating sequence that has to be cut off, just like 0.33r has a repeating sequence.
Quote:

So is the precision lost when converting it back to decimal?
No, the precision is lost because the base-2 system cannot store the number exactly; it is lost when the decimal text is converted to binary, not on the way back.
What Every Computer Scientist Should Know About Floating-Point Arithmetic
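
For anyone who wants to check that bit layout on their own machine, here is a minimal sketch (an editorial addition, assuming float is a 32-bit IEEE 754 single) that pulls the sign, exponent, and mantissa fields out of 0.1f:
Code:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float f = 0.1f;
    uint32_t bits;

    /* Copy the float's object representation into a 32-bit integer. */
    memcpy(&bits, &f, sizeof bits);

    uint32_t sign     = bits >> 31;           /* 1 sign bit                */
    uint32_t exponent = (bits >> 23) & 0xFFu; /* 8 exponent bits, bias 127 */
    uint32_t mantissa = bits & 0x7FFFFFu;     /* 23 mantissa bits, hidden 1 */

    printf("sign     = %u\n", sign);                                     /* 0 */
    printf("exponent = %u (unbiased %d)\n", exponent, (int)exponent - 127); /* 123 (-4) */
    printf("mantissa = 0x%06X\n", mantissa);  /* 0x4CCCCD = 10011001100110011001101 */
    return 0;
}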

ivanatora 05-05-2009 08:58 AM

Thanks! Everything now is crystal clear.

