Programming - This forum is for all programming questions. The question does not have to be directly related to Linux, and any language is fair game.
I need a high-precision timer that returns accurate values up to the 99.99th percentile. I am calling clock_gettime(CLOCK_MONOTONIC_RAW, ...) in a tight loop and measuring the time difference between consecutive calls.
Up to the 99.8th percentile I see a time difference of about 100 ns, but then there are a few values (5 out of 10000 calls) that are in the millisecond range. At the moment I am clueless as to the reason behind it.
I am using a Debian Linux 2.6.28 kernel, with kernel scheduling restricted to core 0. The timing measurements are done with the program pinned to the other core. I have also disabled NTP.
Is there any possible explanation for such measurement outliers?
Thanks !
What assumptions do you have WRT time measurements and program execution time? I.e., do you expect the execution time of the same program/piece of code to be the same from call to call? What do you know about how computer HW works? Do you understand how the OS works WRT task switching?
In this context, what I am measuring is the overhead of the clock_gettime() syscall. I am pinning the measurement program to a given core, and I have restricted kernel scheduling to just one core (not the same core as the measurement program). This way I make sure that my task is not rescheduled.
I think I should pose my question differently: what is the possible explanation for this variation, and what could be done to minimize it?
A variation from nanoseconds to milliseconds will have a huge impact if you are taking real-time measurements.
Regarding
Quote:
I am using linux debian kernel 2.6.28, with core scheduling on core 0. The timing measurement are done with program tied to the other core.
- does this mean that the program doing the measurements is never suspended/resumed by the kernel?
My understanding of the quote is that the kernel is pinned to one core and your program to the other, but IMO this doesn't mean your program cannot be suspended/resumed. Suppose your program needs IO and the IO device is busy - your program will be suspended by the kernel until the IO is available. Also, I think that pinning a program to a core does not prevent other programs from using the same core; i.e., pinning reduces the task-switching overhead of the pinned program, but does not stop other programs from running on that core.
Now, the function you've chosen, according to my understanding of its manpage, gives you wallclock duration (and not CPU-cycle duration). So any task suspension/resumption can cause variations of this size and bigger.
Quote: (originally posted by Sergei Steshenko)
- does this mean that the program doing the measurements is never suspended/resumed by the kernel?
Sorry for not being clear earlier: I have excluded all but one of the CPU cores from the default scheduler and ensured that the measurement code runs on the other core. That excludes the possibility of that core being used by another program, or of the program being suspended/resumed.
Quote:
...That excludes the possibility of that core being used by another program, or of the program being suspended/resumed.
So, again, suppose your supposedly excluded program executes, say, a 'printf' statement, which ultimately translates into a system call. Since you have a system call, how can you prevent the system (kernel) from suspending/resuming your task?
I am writing my conclusions based on my general understanding of how things work; my understanding can be wrong. If you think your understanding of why/how you can achieve a state in which your task is neither suspended nor resumed is correct, could you point me to some documentation stating that?