Dear chief_officer,
The moment I saw your code comparing doubles, I knew where it would lead.
I am also from the era of mainframes, with an array processor that was slower than a 486 running DOS. And dozens of us used to work on it! The first thing they taught us there was never to compare floats without rounding them off first.
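To make the point concrete, here is a minimal Java sketch of that "round off (or use a tolerance) before comparing" advice. The tolerance 1e-9 is just an illustrative choice; pick one that suits your problem's scale:

```java
public class FloatCompare {
    public static void main(String[] args) {
        double a = 0.1 + 0.2;   // actually 0.30000000000000004 in binary
        double b = 0.3;

        // Direct comparison fails even though the values are "equal" on paper.
        System.out.println(a == b);                // false

        // Compare within a tolerance instead (epsilon is illustrative).
        double eps = 1e-9;
        System.out.println(Math.abs(a - b) < eps); // true
    }
}
```

The same idea applies to loop conditions and sorting keys: never test floats for exact equality unless you know both values came from the exact same computation.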
As jinkels said, a lot depends on how well the math libraries are implemented in a language. Some languages are better than others. Even in Fortran, different compilers give different results, and IBM has developed a more accurate library, etc.
The following link is a must-read for anyone doing scientific computing.
http://en.wikipedia.org/wiki/Floating_point
I particularly like this part:
Quote:
Floating point arithmetic is not associative. This means that in general for floating point numbers x, y, and z: (x + y) + z ≠ x + (y + z)
Floating point arithmetic is also not distributive. This means that in general: x(y + z) ≠ xy + xz
In short, the order in which operations are carried out can change the output of a floating point calculation. This is important in numerical analysis since two mathematically equivalent formulas may not produce the same numerical output, and one may be substantially more accurate than the other.
For example, with most floating-point implementations, (1e100 - 1e100) + 1.0 will give the result 1.0, whereas (1e100 + 1.0) - 1e100 gives 0.0.
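You can run the quoted example yourself; a short Java sketch reproducing it:

```java
public class FloatOrder {
    public static void main(String[] args) {
        // Subtracting first leaves 0.0, then adding 1.0 is exact.
        System.out.println((1e100 - 1e100) + 1.0); // 1.0

        // Adding 1.0 to 1e100 is absorbed (1.0 is far below one ulp
        // of 1e100), so the subtraction leaves nothing.
        System.out.println((1e100 + 1.0) - 1e100); // 0.0
    }
}
```

Two mathematically identical expressions, two different answers, purely because of evaluation order.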
Finally, Java has third-party libraries for advanced math. There are a lot of articles at JavaWorld on Java for scientists and engineers.
ppanyam