Ok, so for a class I am debugging a code snippet that compares the gettimeofday() function against the TSC register. The idea is to grab a value of each, sleep for X microseconds, and then grab another value of each. The before/after difference from each source is then compared (delta of gettimeofday vs. delta of TSC). When I run the code on a Red Hat-based machine the results make sense: the two deltas differ by only about 0.005%, which seems reasonable.
However, when I run the same code on a different Linux machine (FC7), the deltas are very erratic and the percent difference ranges anywhere from 2% to 80%. This makes no sense to me, and I don't know why it behaves differently. I also ran it on a third platform (FC4) to see what was going on; that one was somewhat erratic too, but the percentages were lower, usually around 5%, and more consistent than on FC7.
The following is a snippet of the main code so you can get an idea of what is going on:
Code:
/* Compare gettimeofday() deltas against TSC deltas around a nanosleep().
   Init_TSC(), Read_TSC(), Convert_TSC(), tsc_t and TSC_Microseconds come
   from the TSC helper code supplied for the class. */
#include <stdio.h>
#include <math.h>
#include <time.h>
#include <sys/time.h>

int main(void)
{
    struct timespec dt;
    struct timeval bt, et;
    tsc_t tsc_bt, tsc_et;
    double delta_t, tsc_delta_t;
    int i;

    printf("CPU clock: %.2f MHz\n", Init_TSC() / 1000000.0);
    for (i = 100; i < 500000; i += 100) {
        dt.tv_sec  = 0;
        dt.tv_nsec = i * 1000;            /* requested sleep, in nanoseconds */

        gettimeofday(&bt, NULL);
        Read_TSC(tsc_bt);
        nanosleep(&dt, NULL);
        Read_TSC(tsc_et);
        gettimeofday(&et, NULL);

        /* elapsed wall-clock time, in microseconds
           (the seconds part must be scaled to usec before adding) */
        delta_t  = (et.tv_sec - bt.tv_sec) * 1000000.0;
        delta_t += (et.tv_usec - bt.tv_usec);
        /* elapsed TSC time, converted to microseconds */
        tsc_delta_t = Convert_TSC(tsc_et - tsc_bt, TSC_Microseconds);

        printf("Requested %6d uS sleep: actual %8.3f, tsc %8.3f, delta %.6f, %.3f%% err\n",
               i, delta_t, tsc_delta_t, delta_t - tsc_delta_t,
               (fabs(delta_t - tsc_delta_t) / delta_t) * 100.0);
    }
    return 0;
}
Convert_TSC() just converts the raw TSC tick count into the time unit specified (TSC_Microseconds here).
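In case it helps, the TSC helpers are basically a thin wrapper around the rdtsc instruction plus a divide by the calibrated clock rate. Something along these lines (just a rough sketch, not the exact code from the class; read_tsc, tsc_to_usec and cpu_hz are placeholder names I made up, and the real Convert_TSC takes a unit argument instead of being hardcoded to microseconds):
Code:
/* Hypothetical sketch of the TSC helpers -- not the class's actual code.
   Assumes x86/GCC and that Init_TSC() has already measured cpu_hz. */
#include <stdint.h>

typedef uint64_t tsc_t;

static double cpu_hz = 2400000000.0;   /* placeholder; Init_TSC() would fill this in */

/* Read the 64-bit time-stamp counter with the rdtsc instruction. */
static inline tsc_t read_tsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((tsc_t)hi << 32) | lo;
}

/* Convert a tick count to microseconds: ticks / (ticks per microsecond). */
static double tsc_to_usec(tsc_t ticks)
{
    return (double)ticks / (cpu_hz / 1000000.0);
}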
Thanks guys for any help!