Programming
This forum is for all programming questions. The question does not have to be directly related to Linux, and any language is fair game.
I have a cute suggestion for deciding whether the two solutions are the same or different in a less rigorous but much more user-friendly manner:
Pick one of the simple ways of converting a floating-point value to a string (I think a string stream is the best of those). Convert both solutions to strings and compare the two strings. If the strings differ, then from the user's point of view the numbers differ. If the strings are the same, then the user would consider the program flawed for thinking the numbers differ.
Having converted those numbers to strings once, do not convert them again by printing the numbers in the final output. Instead, print those strings. Avoid errors by avoiding doing the same job twice, and especially avoid depending on the same job, done twice, producing the same result. Optimized floating-point code often produces different results when the same computation appears in two different sections of code.
I'm not sure if floating point suffers from the same problems, but for integers, some issues you have to watch out for come from the types of operations you perform: dividing or multiplying amounts to shifting, and shifting can lose precision. For instance, if you divide by 32 you shift the variable right by 5 bits, losing whatever information was in bits 0-4, and you can never get it back.
I know that we have a conversion algorithm where we represent the values as integers: we scale the numbers up by a factor of about 4096, perform our multiplications and divisions on them to get interim results, and finish by dividing back by 4096 so that the placement of the least significant digit is retained. We then convert to floating point.
As for the other question you asked, where the prof discussed solve_linear() and solve_quadratic(): they are suggesting you use functions because separating the code out improves readability and modularity. This way you can call the general functions solve_linear() and solve_quadratic() many times over without having to rewrite the code.
Quote:
Not sure if floating point suffers the same things, but for integers, some issues you have to watch out for are the types of operations you are performing. Because when you divide or multiply this causes shifting and therefore loss of precision. For instance if you divide by 32 you shift the variable to the right by 5 bits and then lose whatever information was in bits 0-4 and you can never get it back.
Floating point is a base-2 exponent-and-significand format. Loss of precision depends on the size of the number... and some things just don't work, such as adding 1.0 to a very large number. As the value gets larger, the gaps between adjacent representable values get larger and larger, which is where roundoff causes problems: adding one to a value SHOULD increase it, but if the increased value falls inside one of the gaps, it gets rounded away.
Quote:
I know that we have a conversion algorithm where we represent it as integers, scale the numbers up by about a factor of 4096 and when we perform multiplications and divisions of them, get our interim results, we end it by dividing back by 4096 to so that the placement of the least significant digit was retained. We then convert to floating point.
Sometimes that conversion becomes "inexact". How much of a problem that is depends on the magnitude.
Quote:
Not sure if floating point suffers the same things, but for integers, some issues you have to watch out for are the types of operations you are performing. Because when you divide or multiply this causes shifting and therefore loss of precision. For instance if you divide by 32 you shift the variable to the right by 5 bits and then lose whatever information was in bits 0-4 and you can never get it back.
You don't have to worry about "losing" anything when multiplying or dividing a float, you only have to worry about it when you're adding/subtracting, as jpollard mentioned.
eg 1: ((.001 * 1e10) / 1e10) will still be .001
eg 2: ((.001 + 1e10) - 1e10) will not be .001
Last edited by suicidaleggroll; 02-17-2015 at 11:08 AM.
Quote:
You don't have to worry about "losing" anything when multiplying or dividing a float, you only have to worry about it when you're adding/subtracting, as jpollard mentioned.
eg 1: ((.001 * 1e10) / 1e10) will still be .001
eg 2: ((.001 + 1e10) - 1e10) will not be .001
Depends on the optimization... the compiler can constant-fold the whole add/subtract/multiply/divide of 1e10 away at compile time.
But hiding the values in different variables (and passing them via function calls) should defeat the compiler optimizations.
At one time, there was an IBM paper that analyzed different CPU implementations of floating point (unfortunately, I can't find it now) and the errors that could accumulate. Several of the plots were for computing a spiral via successive arithmetic (I think it was a geometric spiral), where the exact same program was run on computers from different manufacturers. Each one produced the same spiral in the beginning, but each computer would eventually go off the deep end after a different number of iterations, and in different ways. I believe all were using IEEE floating point; the errors came down to different optimizations and other individual differences of implementation.