Old 11-28-2012, 03:36 PM   #1
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,573

Rep: Reputation: 2142
precise timing knowledge in C


I have a C program that's interfacing with some hardware on a Linux box. This program doesn't need precise timing control, but it does need precise timing knowledge.

Ideally, the program would issue a command and start a timer; some time later (less than 3 minutes, so I don't have to worry about clock drift affecting the results too much), some data will come in, and I need to know exactly how long it's been since the timer was started, ideally to an accuracy of a few milliseconds.

This could easily be accomplished by using the clock functions to store the system time when the command is issued and subtracting it from the system time when the data is collected; however, that is susceptible to clock changes between the two events. If, for example, NTP fires up and changes the system clock by 5 seconds, my calculated time difference will be 5 seconds off, which I can't have.

Is there any other timing control in C that I could use for this? Some kind of tic/toc operation that will not be affected by changes/updates to the system clock?
 
Old 11-28-2012, 04:53 PM   #2
JohnGraham
Member
 
Registered: Oct 2009
Posts: 467

Rep: Reputation: 139
Quote:
Originally Posted by suicidaleggroll View Post
This could easily be accomplished by using the clock functions to store the system time when the command is issued and subtracting it from the system time when the data is collected; however, that is susceptible to clock changes between the two events. If, for example, NTP fires up and changes the system clock by 5 seconds, my calculated time difference will be 5 seconds off, which I can't have.
You can ask clock_gettime() to use CLOCK_MONOTONIC in order to avoid these issues - as its name suggests, it never jumps when the system clock is changed, so setting or stepping the system time won't show up in your measured interval.
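
A minimal tic/toc sketch of that approach (the names and the printf are just illustrative; on older glibc you may need to link with -lrt for clock_gettime()):

Code:
#include <stdio.h>
#include <time.h>

/* Seconds elapsed between two timespecs. */
static double elapsed_seconds(const struct timespec *start, const struct timespec *end)
{
    return (double)(end->tv_sec - start->tv_sec)
         + (double)(end->tv_nsec - start->tv_nsec) / 1e9;
}

int main(void)
{
    struct timespec start, now;

    /* "tic": take the start time right after issuing the command. */
    clock_gettime(CLOCK_MONOTONIC, &start);

    /* ... issue the command and wait for the data to come in ... */

    /* "toc": read the clock again when the data arrives. */
    clock_gettime(CLOCK_MONOTONIC, &now);

    printf("elapsed: %.3f s\n", elapsed_seconds(&start, &now));
    return 0;
}

If even NTP's gradual rate adjustments matter to you, Linux also offers CLOCK_MONOTONIC_RAW (since 2.6.28), which is not adjusted at all.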
 
2 members found this post helpful.
Old 11-28-2012, 07:42 PM   #3
theNbomr
LQ 5k Club
 
Registered: Aug 2005
Distribution: OpenSuse, Fedora, Redhat, Debian
Posts: 5,399
Blog Entries: 2

Rep: Reputation: 908
I think the key phrase in all of this is 'some data will come in'. What does 'come in' mean? What will your program/hardware/firmware/whatever do to determine that the data has come in? Does the arrival of the data stop the clock? To some degree, the same applies to the phrase 'issue a command'.
Usually, to measure time accurately, you want a hardware timer that gets gated or triggered by an edge on one or more digital inputs, and is clocked by some accurate clock signal. The actual starting and stopping of the timer ideally does not require any software, other than configuring and arming the timer. Software can poll the timer periodically to determine whether the start and stop events have occurred, or to get interval times. Relying on software to start and stop a timer will probably not be reliable to within a few milliseconds; certainly not unless the kernel has been built with a 1 kHz tick (CONFIG_HZ_1000=y) or high-resolution timers (CONFIG_HIGH_RES_TIMERS=y) enabled in the kernel config.

--- rod.
 
1 member found this post helpful.
Old 11-28-2012, 08:34 PM   #4
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,573

Original Poster
Rep: Reputation: 2142
Quote:
Originally Posted by JohnGraham View Post
You can ask clock_gettime() to use CLOCK_MONOTONIC in order to avoid these issues - as its name suggests, it never jumps when the system clock is changed, so setting or stepping the system time won't show up in your measured interval.
Thanks for the suggestion; I'll start experimenting with that when I'm back in the office tomorrow.
 
Old 11-28-2012, 08:44 PM   #5
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,573

Original Poster
Rep: Reputation: 2142
Quote:
Originally Posted by theNbomr View Post
I think the key phrase in all of this is 'some data will come in'. What does 'come in' mean? What will your program/hardware/firmware/whatever do to determine that the data has come in? Does the arrival of the data stop the clock? To some degree, the same applies to the phrase 'issue a command'.
Usually, to measure time accurately, you want a hardware timer that gets gated or triggered by an edge on one or more digital inputs, and is clocked by some accurate clock signal. The actual starting and stopping of the timer ideally does not require any software, other than configuring and arming the timer. Software can poll the timer periodically to determine whether the start and stop events have occurred, or to get interval times. Relying on software to start and stop a timer will probably not be reliable to within a few milliseconds; certainly not unless the kernel has been built with a 1 kHz tick (CONFIG_HZ_1000=y) or high-resolution timers (CONFIG_HIGH_RES_TIMERS=y) enabled in the kernel config.

--- rod.
Thanks for the inquiry.

Here's a high level description of the system:

There are two hardware devices, a signal generator and an A/D. The computer sends a command to the siggen via RS232 to start a frequency sweep, then samples the A/D for the next 2.5 minutes. After 2.5 minutes, the computer tells the siggen to restart the sweep and starts the process over again. This cycle repeats indefinitely.

There is a server process on the computer reading data from the A/D in real time, filling a 128-sample buffer. When the buffer is filled, an interrupt flag is set (48 kHz sampling means the flag will be set every ~2.7 ms). There is a client process monitoring the status of this flag; when the flag gets set, the client pulls in the data and appends it to an array. When the array is filled to the specified length (0.5-1 sec of data, configurable), some processing is performed and a value is generated. This value is written out to a data file, along with the time since the latest sweep began.

The focus here is properly calculating this "time since the latest sweep began". I will measure the delay between the RS232 "sweep start" instruction and the actual start of the sweep. I will also measure the delay between when the data is sampled and when it makes it into the client process. Beyond that, the issue is simply a matter of recording the time between when the "sweep start" instruction is sent and when the 0.5-1 sec buffer is filled and the processing on it begins. In the OP I said the requirement is "a few milliseconds", but it really isn't that stringent. I'd say anything less than 100 ms is good, and less than 250 ms is acceptable. Beyond that, the timing error will create a bias in the data that skews the results to the point that they're unusable.

This system is just a proof-of-concept. When/if the system goes operational, it will be running on an embedded device (not running Linux), and will most likely employ an FPGA for precise timing control to well under 1 ms. However, in the meantime, I can't have any unpredictable system clock changes causing 500+ ms errors in the calculated "time since sweep began", hence this thread.

Unfortunately this is one of those cases where I have 1 week to generate a proof-of-concept system, the results of which will be used to demonstrate feasibility and to write the proposal for the follow-on project to build the actual system, where I'll have 6+ months to develop and finalize the software before it goes operational. This means that some shortcuts and assumptions must be made in this proof-of-concept, but as long as they don't affect the results to the point where they're unusable, that's alright. Basing the timing off of the system clock, with its unpredictable updates, is one of those problems that would contaminate the results. Disabling system clock updates by turning off NTP, etc. would relieve the problem in the short term, but long-term time-stamping would begin to suffer as the clock drifts, so I'd rather not rely on that.

Last edited by suicidaleggroll; 11-28-2012 at 08:51 PM.
 
Old 11-29-2012, 08:48 AM   #6
theNbomr
LQ 5k Club
 
Registered: Aug 2005
Distribution: OpenSuse, Fedora, Redhat, Debian
Posts: 5,399
Blog Entries: 2

Rep: Reputation: 908
Okay, that's a much clearer picture. The act of sending a command via RS-232 is difficult to define as an instant in time, since serial communication takes some finite time, and there can be indeterminate delays as the OS buffers and sends the data. It looks a lot like the crux of the problem is that you're trying to measure a delta-time where the start and stop events are determined in two different processes. In a single process, it would be easy to use clock_gettime() or a similar system call at each event and compute the delta at the terminating event. The solution to that problem seems to be simply using interprocess communication of some sort (message queues seem appropriate, since that also provides a measure of synchronization), so your server process can hand over the start time to the client process, which can then compute the delta (see the sketch below). Or, are the two processes even on the same host?
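
A minimal sketch of that handoff, assuming a POSIX message queue and both processes on one host; the queue name and the single-process demo structure are illustrative only, error handling is omitted for brevity, and you'd link with -lrt:

Code:
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <time.h>

#define QUEUE_NAME "/sweep_start"   /* hypothetical queue name */

/* Server side: timestamp the "sweep start" command and hand it to the client. */
static void record_sweep_start(void)
{
    struct mq_attr attr = { 0 };
    attr.mq_maxmsg  = 8;
    attr.mq_msgsize = sizeof(struct timespec);

    mqd_t mq = mq_open(QUEUE_NAME, O_CREAT | O_WRONLY, 0644, &attr);
    struct timespec start;
    clock_gettime(CLOCK_MONOTONIC, &start);      /* taken when the command is sent */
    mq_send(mq, (const char *)&start, sizeof start, 0);
    mq_close(mq);
}

/* Client side: at the terminating event, pull the start time and compute the delta. */
static void report_elapsed(void)
{
    mqd_t mq = mq_open(QUEUE_NAME, O_RDONLY);
    struct timespec start, now;
    mq_receive(mq, (char *)&start, sizeof start, NULL);
    clock_gettime(CLOCK_MONOTONIC, &now);
    double dt = (double)(now.tv_sec - start.tv_sec)
              + (double)(now.tv_nsec - start.tv_nsec) / 1e9;
    printf("time since sweep began: %.3f s\n", dt);
    mq_close(mq);
    mq_unlink(QUEUE_NAME);
}

int main(void)
{
    record_sweep_start();   /* in the real system this runs in the server process */
    report_elapsed();       /* ...and this in the client process */
    return 0;
}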

Is your ADC also a remote device, or is it on a locally attached bus such as PCI or VME? If locally attached, can you not use the pacing of the ADC as a timing mechanism? You specify a particular conversion rate of 48 kHz, so I assume it should be possible to count conversions to measure time.

In practice, I have used gettimeofday() (I didn't know about clock_gettime() until reading this thread) to measure interval timing of messages received on a CANbus network. The results were very consistent with timing information I acquired using other means, and I did not observe any abrupt time shifts. My theory is that once NTP has established the local time setting, the adjustments made thereafter are very small; I cannot quantify what I mean by 'small', but I'd guesstimate it to be in the range of a small number of milliseconds at a time.

--- rod.
 
1 member found this post helpful.
Old 11-29-2012, 12:43 PM   #7
JohnGraham
Member
 
Registered: Oct 2009
Posts: 467

Rep: Reputation: 139
Quote:
Originally Posted by theNbomr View Post
My theory is that once NTP has established the local time setting, the adjustments made thereafter are very small; I cannot quantify what I mean by 'small', but I'd guesstimate it to be in the range of a small number of milliseconds at a time.
From what I remember, if the time difference is small enough then the NTP daemon uses adjtime() instead of settimeofday(), which means there are no sudden "jumps" in the time (assuming the NTP servers work correctly). Still, I always just use the monotonic clock for this as any potential problems just disappear.
 
Old 11-29-2012, 03:23 PM   #8
theNbomr
LQ 5k Club
 
Registered: Aug 2005
Distribution: OpenSuse, Fedora, Redhat, Debian
Posts: 5,399
Blog Entries: 2

Rep: Reputation: 908
Quote:
Originally Posted by JohnGraham View Post
From what I remember, if the time difference is small enough then the NTP daemon uses adjtime() instead of settimeofday(), which means there are no sudden "jumps" in the time
I think that's a more refined version of what I was trying to say. Do you have any idea what magnitude 'small enough' might be?

--- rod.
 
Old 11-30-2012, 04:13 AM   #9
JohnGraham
Member
 
Registered: Oct 2009
Posts: 467

Rep: Reputation: 139
Quote:
Originally Posted by theNbomr View Post
Do you have any idea what magnitude 'small enough' might be?
For ntpdate, half a second - I thought that was the case for ntpd as well, but I couldn't find a reference for it. So if your system clock (or the NTP server's time) is that far out, you'll get a sudden jump instead of a smooth adjustment (unless it's configured to always use adjtime(), but that can also produce weird results).
 
Old 11-30-2012, 10:01 AM   #10
suicidaleggroll
LQ Guru
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 5,573

Original Poster
Rep: Reputation: 2142
Quote:
Originally Posted by theNbomr View Post
Is your ADC also a remote device, or is it on a locally attached bus such as PCI or VME? If locally attached, can you not use the pacing of the ADC as a timing mechanism? You specify a particular conversion rate of 48 kHz, so I assume it should be possible to count conversions to measure time.
That's a fantastic idea; I can't believe I didn't think of it. The ADC is attached via FireWire, and this is the approach I decided to take to handle the timing in the code. So far so good; the system seems to be running well. Now I just need to quantify some of these process times to add an approximate correction for them in the code (delay in the ADC samples, delay between triggering the sending of the "sweep start" RS232 message and the actual start of the sweep, etc).
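
For reference, a minimal sketch of that sample-counting bookkeeping, using the 48 kHz rate and 128-sample chunks described earlier; the loop and names are illustrative, and the corrections are left as a placeholder to be filled in once measured:

Code:
#include <stdio.h>

#define SAMPLE_RATE_HZ  48000.0   /* ADC conversion rate                */
#define CHUNK_SAMPLES   128       /* samples per interrupt-flag buffer  */

int main(void)
{
    unsigned long chunks = 0;     /* 128-sample buffers seen since the sweep command */

    /* Stand-in for the real acquisition loop: one iteration per filled buffer. */
    for (int i = 0; i < 200; i++) {
        chunks++;

        /* Elapsed time implied purely by the ADC's own pacing. */
        double t_since_sweep = (double)chunks * CHUNK_SAMPLES / SAMPLE_RATE_HZ;

        /* Apply the measured corrections (RS232 command-to-sweep delay,
           ADC pipeline delay) here once they have been quantified. */

        if (i == 199)
            printf("time since sweep began: %.4f s\n", t_since_sweep);
    }
    return 0;
}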

Thanks for all the help.
 
  

