#define a float : which precision?
Hello,
For a small project I need to #define some constants for the note frequencies. So I built an even smaller C program that calculates the frequencies as doubles and prints them to stdout to generate my #define file. But printf("#define SI_BEMOLLE\t%.16f", val) gives me 16 digits of precision, 32 gives me 32, so where do I have to stop? Thanks, tano
Hya,
If I understand your question correctly, the answer is: whenever you are satisfied. Say val = 1.234e40; then what you print is mostly meaningless zeros. If you need to distinguish 1.234567890123456 from 1.234567890123457, you need to code accordingly. Also, the precision of double is implementation dependent. Happy Penguins!
Quote:

OK, take the question in a more abstract way: if I have to define a constant that is periodic, like 3.33333333..., how can I know how many digits I have to put in my #define for my PC?
Quote:
Once you decide on your final precision, you have to allow for roundoff error. Every intermediate calculation in your program is subject to roundoff error, and the more calculations you perform, the greater the roundoff error in the final result. Your constant needs to be precise enough to give adequate precision in your final answer. Presumably you will go through this exercise and determine whether your variables need to be single precision or double precision floating point.

Then you can set your periodic constant by dividing 10 by 3. Suppose you chose double precision: set up three double variables, initialize the first two to 10 and 3, and initialize the third to the quotient of the first two. The third variable will then hold 3.33333... repeating to the full precision available in a double.

Steve Stites
