
LinuxQuestions.org (/questions/)
-   Programming (https://www.linuxquestions.org/questions/programming-9/)
-   -   How to accurately measure time intervals? (https://www.linuxquestions.org/questions/programming-9/how-to-accurately-measure-time-intervals-742739/)

10110111 07-25-2009 10:45 AM

How to accurately measure time intervals?
 
I have some machines on which glxgears behaves very strangely: it turns the gears by very small angles for a while (typically for 0.5 seconds), then the angle jumps abruptly, then it is slow again, and so on. At first glance it seemed to show about 0.5 FPS, while the console output claimed about 300 FPS. But when I commented out "#define BENCHMARK" in the glxgears source, I got normal turning, though at the wrong speed. So I put printf("%lf", dt) in the per-frame output code, and found that the time differences looked like roughly 200 values of about 0.000015 seconds, then one of 0.45xxxx seconds, then again those tiny numbers, and so on, which results in the incorrect turning speed. So, is this a kernel bug in gettimeofday, or something else? Which function is better for accurately measuring time?

bastl 07-26-2009 06:59 AM

Linux is a multitasking system, so the kernel (and init) also needs compute time to check the system, as do the other daemons and threads.
A faster CPU, faster main memory, etc. can shorten that break, as can having fewer installed daemons and running apps.

The delay times and the deviation seem normal to me.

10110111 07-29-2009 02:45 PM

>The delay times and the deviation seem normal
Yes, it is normal. But the problem is not in the delays, it is in the measurement. glxgears calls gettimeofday at almost equal intervals (I checked, and didn't see any bursts like 0.5 s), i.e. the computed delay should be about 1/200 of a second instead of 0.000015 s, and that is what is seen on machines which work correctly. Also, there is almost no process activity besides glxgears on these machines, and the result is fully reproducible. It seems that the measurement is done in some strange way, or that gettimeofday is not the function to rely on for accurate measurement.

bastl 07-30-2009 05:34 PM

In short:
The system check is done every 1/4 second on older kernels, and every 1/3 second on newer ones, by default.
Every delay shorter than that will have a break, because shorter delays are generated by a software PLL, which also has a break.
HPET/DMA and RTC/timer can avoid that break, but they have to be programmed directly, without system support. They are also system dependent, so there can't be a standard C/C++ library that supports them.
Gears is an example program and can only use standard libraries, so that it runs on as many computers as possible.
So you could even say that this is a hardware bug.
On RT and multiprocessor systems you won't see or detect such a break.
OpenGL is the answer to that hardware bug: it is also a multiprocessor system of sorts, because it runs on a second processor, the GPU.
So Gears with OpenGL support won't have that break and will turn very smoothly.
To measure delays there is only one way:
programming a timer (RTC/IRQ or HPET/DMA) directly.
Additional information:
Writing to or reading from an RTC port always takes 1/1000000 second.
Exactly debugging times on a multitasking system isn't possible.


All times are GMT -5.