[C] scanf() reads wrong floats; double works fine
Hello,
This is a strange problem and I hope someone has an explanation for it. When I use scanf() to read floating-point numbers, sometimes the last digit is wrong. Code:
#include <stdio.h>
I know that in GCC a float is 4 bytes, but how many bytes are for the integer part and how many for the fractional part of the number? If there is at least 1 byte for the fractional part, that gives a precision of about 0.004, which is more than enough to store the number "99.1" correctly. Apparently even that small number comes out wrong.
Quote:
Remember that a fraction that is (non)periodic in one radix is not necessarily so in another. Modern computers do not count in decimal by default; they count in binary. Start from here: http://en.wikipedia.org/wiki/Floating_point .
There is a simple question I ask people who think this is an error.
Can you accurately represent the value 1/3 in a base-10 system?
Quote:
0.1 in decimal is 0.000110011... in binary (the group 0011 repeats). So the precision is lost when it is converted back to decimal? Maybe it is not exactly periodic and resolves with more bits; is that why putting it in a double works? The other numbers that came out wrong are also represented with more bits. This must be the reason for the precision drop when using floats: part of the significant bits are lost.
Quote:
[0][01111011][10011001100110011001101] Where the boxes represent the different parts of the number in IEEE 754: first the sign, next the exponent, and last the mantissa (the implicit leading 1 is not stored, and the last stored bit is rounded up). You can see the mantissa is a repeating sequence, just like 0.333... has a repeating sequence. Quote:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
Thanks! Everything now is crystal clear.
