Difference between int64_t on Unix and Linux
I have found a bug in our code.
The line is:
int64_t iBase = (int64_t) (d1 / d2);
where d1 and d2 are doubles.
If d1 = 3000 and d2 = 0.01,
then on Unix, iBase is calculated as 299999,
but on Linux, iBase is calculated as 300000 (which is correct!).
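In case it helps, here is a minimal standalone repro (my own isolation of the problem; everything outside the original line is an assumption on my part). Since the cast truncates toward zero, a quotient that lands even slightly below 300000 would come out as 299999, so the program also prints the quotient at full precision before casting:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    double d1 = 3000.0;
    double d2 = 0.01;
    double q = d1 / d2;            /* quotient before truncation */
    int64_t iBase = (int64_t) q;   /* cast truncates toward zero */
    printf("q     = %.17g\n", q);  /* 17 significant digits round-trips a double */
    printf("iBase = %lld\n", (long long) iBase);
    return 0;
}

Compiling and running this on both machines should show whether the division itself already differs between the two systems, or whether only the conversion does.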
Does anyone know why this is happening, or where I can find the definition of int64_t on Linux?
Cheers, Dan.