LinuxQuestions.org: IEEE 754 floating point numbers
Programming: This forum is for all programming questions. The question does not have to be directly related to Linux and any language is fair game.


02-04-2006, 09:09 AM   #1
dmail
Member

Registered: Oct 2005
Posts: 970

Rep:
IEEE 754 floating point numbers

I have been looking at an optimised way of sending floating point values over a network in a cross-platform way, so that when sending a normalised vector there is no need to send 32 bits for each of the i, j and k components; they could be compressed depending on the accuracy required. Floating point numbers are standardised by IEEE 754 as:

Code:
```
S EEEEEEEE MMMMMMMMMMMMMMMMMMMMMMM
0 1      8 9                    31
```
where S is the sign bit, E the exponent bits and M the mantissa bits, which gives the formula

Code:
`(-1)^S * 1.M * 2^(E-127)`

The standard C header "math.h" provides a function for computing this, ldexp(), yet the result I get is not the expected one. For the floating point number 3.7 the binary representation is:

Code:
```
0 10000000 11011001100110011001101
S EEEEEEEE MMMMMMMMMMMMMMMMMMMMMMM
```
so the resulting sum should be

Code:
`1 * ldexp(1.11011001100110011001101, 128 - 127)`

but it gives the result 2.22022. Can someone see what I am doing wrong? Cheers.
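The field layout described above can be probed directly from a running program. A minimal sketch (the function names are my own, and it assumes `float` is a 32-bit IEEE 754 single, which is true on every mainstream platform):

```c
#include <stdint.h>
#include <string.h>

/* Extract the three IEEE 754 fields of a 32-bit float, matching the
 * diagram above: bit 0 is the sign, bits 1..8 the biased exponent,
 * bits 9..31 the mantissa.  memcpy() is used to read the raw bits
 * without violating aliasing rules. */
uint32_t float_sign(float f)     { uint32_t b; memcpy(&b, &f, sizeof b); return b >> 31; }
uint32_t float_exponent(float f) { uint32_t b; memcpy(&b, &f, sizeof b); return (b >> 23) & 0xFFu; }
uint32_t float_mantissa(float f) { uint32_t b; memcpy(&b, &f, sizeof b); return b & 0x7FFFFFu; }
```

For 3.7f these return S = 0, E = 128 (so a binary exponent of 128 - 127 = 1) and M = 0x6CCCCD, which is exactly the bit string 11011001100110011001101 quoted in the post.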
02-04-2006, 02:57 PM   #2
dmail
Member

Registered: Oct 2005
Posts: 970

Original Poster
Rep:
oops

Code:
`1 * ldexp(1.11011001100110011001101,128 -127)`
it should be
Code:
`1.0+(1/2)+(1/4)+0+(1/16)...etc`
and there's a drop in precision between 0.1 and 0.2, which I don't think is acceptable. ;(
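The correction above can be checked directly: each mantissa digit is a binary fraction bit worth 2^-1, 2^-2, and so on, not a decimal digit (which is why feeding 1.11011... to ldexp() as a decimal literal gave 2.22022 earlier). A sketch with the bit string for 3.7f hard-coded (the function name is my own):

```c
#include <math.h>

/* Rebuild 3.7 from its fields: the implicit leading 1, plus each set
 * mantissa bit as a binary fraction 2^-(i+1), then apply the unbiased
 * exponent 128 - 127 with ldexp(). */
double rebuild_3_7(void)
{
    const char *mantissa = "11011001100110011001101";
    double m = 1.0;                    /* implicit leading 1 */
    for (int i = 0; mantissa[i] != '\0'; ++i)
        if (mantissa[i] == '1')
            m += ldexp(1.0, -(i + 1)); /* add 2^-(i+1) per set bit */
    return ldexp(m, 128 - 127);        /* m is approx. 1.85, times 2^1 */
}
```

The sum comes out to approximately 1.85, and doubling it lands exactly on the nearest single-precision value to 3.7.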

<edit>
although after doing this I have come across some info describing what I was trying to do
Quote:
 If you know that three of your floats are components of a normalized vector (i.e. they can only be between [-1,1]), and you want full float precision, then you can bump the float into the range of +-[1,2] and safely ignore the entire exponent block for those floats (8 out of 32 bits each) during network transmission. That's a whole byte lopped off for each of those floats, with no loss of precision. If you ARE willing to lose precision, you can save even more. Once again, the key here is to know your data.
</edit>
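The trick in that quote can be sketched roughly as follows (the function names are made up, and it assumes each component lies in [-1,1] and that `float` is a 32-bit IEEE 754 single): bump the magnitude into [1,2) so the exponent byte is a known constant, transmit only the sign bit plus the 23 mantissa bits (3 bytes), and restore the fixed exponent on the receiving side.

```c
#include <math.h>
#include <stdint.h>
#include <string.h>

/* Pack one component of a normalised vector (v in [-1,1]) into 3 bytes:
 * 1 + |v| lies in [1,2), where the biased exponent is always 127, so
 * only the sign bit and the 23 mantissa bits need to be sent. */
void pack_component(float v, uint8_t out[3])
{
    uint32_t sign = v < 0.0f;
    float tmp = 1.0f + fabsf(v);
    if (tmp >= 2.0f)                   /* |v| == 1 would change the exponent, */
        tmp = nextafterf(2.0f, 1.0f);  /* so clamp one ulp below 2.0          */

    uint32_t bits;
    memcpy(&bits, &tmp, sizeof bits);
    uint32_t packed = (sign << 23) | (bits & 0x7FFFFFu); /* 24 bits total */

    out[0] = (uint8_t)(packed >> 16);
    out[1] = (uint8_t)(packed >> 8);
    out[2] = (uint8_t)packed;
}

float unpack_component(const uint8_t in[3])
{
    uint32_t packed = ((uint32_t)in[0] << 16) | ((uint32_t)in[1] << 8) | in[2];
    uint32_t bits = (127u << 23) | (packed & 0x7FFFFFu); /* restore exponent */

    float tmp;
    memcpy(&tmp, &bits, sizeof tmp);
    float v = tmp - 1.0f;              /* exact: Sterbenz subtraction */
    return ((packed >> 23) & 1) ? -v : v;
}
```

A component of exactly ±1 maps to 2.0, whose exponent byte differs, so it is clamped one ulp below; all other values round-trip to within half an ulp of the [1,2) range, matching the quote's "no loss of precision" claim for practical purposes.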

Last edited by dmail; 02-04-2006 at 03:34 PM.

02-05-2006, 02:52 AM   #3
ppanyam
Member

Registered: Oct 2004
Location: India
Distribution: Redhat
Posts: 88

Rep:
If you go ahead with your plan of optimized data transfer across the network, how much are you gaining? You have to write code for optimized compression on one side and decompression on the other side (if I can use those words), then test them, test them and test them... Then something changes and it doesn't fit your scheme of things... Maybe the data needs to be read by some other application... Yes, such things were needed in the old days when systems were not so rich in resources. But nowadays? It is better to stick to standards, at least for atomic-level data! Maybe I am wrong, but I can't get myself to do it right now. And I do handle terabytes of data!
02-05-2006, 06:10 AM   #4
dmail
Member

Registered: Oct 2005
Posts: 970

Original Poster
Rep:
I should point out this is for a fast-paced game, so time is critical and the network is the bottleneck. For a normalised 3D vector there is a saving of 3 bytes, chopping a byte off each float. The game has a network thread that doesn't convert back to floats or vice versa; it just reads in the data and passes it to the game (if needed), or gets the data from the game, so there's no time taken away from the network thread to interpret the data.