Programming
This forum is for all programming questions. The question does not have to be directly related to Linux, and any language is fair game.
I am starting to learn signal handling in Linux and have been trying out some simple code to deal with SIGALRM. The code below sets a timer that counts down; when the timer expires, a SIGALRM is raised. The handler for the signal just increments a variable called count. This repeats until the user presses 'q' on the keyboard. The relevant timer setup is shown below:
timer.it_interval.tv_sec = 0; //Deal only in usec
timer.it_interval.tv_usec = 1000;
timer.it_value.tv_sec = 0; //Deal only in usec
timer.it_value.tv_usec = 1000;
The problem I am facing is this: when I set the timer to 1000000 usec (i.e. 1 sec) it works fine. However, if I keep reducing the time to 100000, 10000, 1000 usec and so on, the timing seems to be too slow; the count variable is not incremented as fast as it should be. Why is this? I have a hunch I am making some silly mistake here, but I am not sure what it is.
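For reference, here is a minimal self-contained sketch of the setup described above (the handler name, the input loop, and the SA_RESTART flag are my assumptions; only the timer values are from the post):

#include <signal.h>
#include <stdio.h>
#include <sys/time.h>

static volatile sig_atomic_t count = 0;

static void alarm_handler(int signo)
{
    (void)signo;
    count++;                        /* one tick per SIGALRM */
}

int main(void)
{
    struct sigaction sa = { 0 };
    struct itimerval timer;

    sa.sa_handler = alarm_handler;
    sa.sa_flags = SA_RESTART;       /* let getchar() resume after each signal */
    sigaction(SIGALRM, &sa, NULL);

    timer.it_interval.tv_sec = 0;   /* deal only in usec */
    timer.it_interval.tv_usec = 1000;
    timer.it_value.tv_sec = 0;
    timer.it_value.tv_usec = 1000;
    setitimer(ITIMER_REAL, &timer, NULL);

    while (getchar() != 'q')        /* quit when the user presses 'q' */
        ;
    printf("count = %d\n", (int)count);
    return 0;
}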
"Timers will never expire before the requested time, but may expire some (short) time afterwards, which depends on the system timer resolution and on the system load."
There is more useful information in the man page, but there is no need to quote the whole thing here.
"Timers will never expire before the requested time, but may expire some (short) time afterwards, which depends on the system timer resolution and on the system load."
There is still more useful information in the man page, but why quote the whole man page.
What I gather from the man pages is that no timing function (sleep(), usleep(), setitimer(), etc.) will expire precisely at the specified point; it will always expire slightly afterwards.
I ran some test code using usleep() and found some interesting results.
When I set usleep to 0.1 sec, the actual sleeping time deviates from the set point by about +5%. The error jumps to about +40% at 0.01 sec, and to about +300%(!!!!) at 0.001 sec.
It looks like the error keeps growing as the requested delay shrinks. This could be due to the system timer resolution, as mentioned, and would explain why setitimer was behaving strangely for small time values.
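For reference, a sketch of the kind of drift test described above (the exact test code is an assumption): request a delay with usleep(), time it with gettimeofday(), and report the error.

#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

static long elapsed_usec(struct timeval a, struct timeval b)
{
    return (b.tv_sec - a.tv_sec) * 1000000L + (b.tv_usec - a.tv_usec);
}

int main(void)
{
    long requests[] = { 100000, 10000, 1000 };  /* 0.1s, 0.01s, 0.001s */
    int i;

    for (i = 0; i < 3; i++) {
        struct timeval start, end;
        long actual;

        gettimeofday(&start, NULL);
        usleep(requests[i]);
        gettimeofday(&end, NULL);
        actual = elapsed_usec(start, end);
        printf("requested %ld us, slept %ld us (error %+.1f%%)\n",
               requests[i], actual,
               100.0 * (actual - requests[i]) / requests[i]);
    }
    return 0;
}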
Yep, "system timer resolution" is what you need to check. Also remember other progs are running, so can affect the elapsed or 'wallclock' resolution observed .
The kernel's timer is updated via an interrupt, and perhaps a 'tasklet' is set to send signals to processes waiting on SIGALRM. Alternatively, the tasklets are set up when the scheduler runs; I don't know which is the case and I don't want to dig through the source.
The scheduler is also run at regular intervals via an interrupt; on a typical x86 kernel this rate may vary from 100 to 1000 times per second. Signal delivery happens in process context, so a process only sees a pending signal when the scheduler next runs it.
So even if the internal timer had very good resolution (say 1 ns), two issues remain: how often the timer is updated, and how often the scheduler is invoked. Your timer can have nanosecond resolution (picosecond, even, if you wish), but if it is only updated every 0.01 s that resolution is rather pointless.
On top of all this, the scheduler may delay the execution of your process, which means it won't see the signal until the scheduler runs it again, whenever that may be. You can see the effect of scheduling delays by running a dummy program at high priority and then running your SIGALRM program at fairly low priority. This is the "system load" delay mentioned in the man pages.
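A rough way to try that experiment (this program is my own sketch, not from the thread): run a CPU-bound hog in another shell, start this at the lowest priority, and see how many of the expected ticks get lost.

#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <time.h>
#include <unistd.h>

static volatile sig_atomic_t count = 0;

static void handler(int signo) { (void)signo; count++; }

int main(void)
{
    struct sigaction sa = { 0 };
    struct itimerval timer = { { 0, 10000 }, { 0, 10000 } };  /* 10 ms period */

    sa.sa_handler = handler;
    sigaction(SIGALRM, &sa, NULL);

    nice(19);                        /* lowest scheduling priority */
    setitimer(ITIMER_REAL, &timer, NULL);

    time_t end = time(NULL) + 10;    /* run for roughly 10 seconds */
    while (time(NULL) < end)
        pause();                     /* sleep until the next signal */

    /* at one tick per 10 ms we would expect ~1000 ticks in 10 s;
       expirations that arrive while the process is not scheduled
       coalesce into a single pending SIGALRM, so under load the
       count comes up short */
    printf("got %d ticks, expected ~1000\n", (int)count);
    return 0;
}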