CPU time vs. wall time: why is wall time sometimes less than CPU time?
I have been using clock() to calculate CPU time and time() to
calculate wall time. However, since time() does not provide milli/
microsecond accuracy, I started using gettimeofday() as below to
measure wall time:
struct timeval tv1, tv2;
struct timezone tz1, tz2;

gettimeofday(&tv1, &tz1);
double time_start1 = (double)tv1.tv_sec + (double)tv1.tv_usec / 1000000.0;

// ALL ALGO PROCESSING HERE

gettimeofday(&tv2, &tz2);
double time_stop1 = (double)tv2.tv_sec + (double)tv2.tv_usec / 1000000.0;

LOGGER.info("BackgroundEstimationAlgoT Run WALL_TIME: %0.3f",
            time_stop1 - time_start1);
At times I have noticed that the calculated WALL TIME is less
than the CPU TIME, and I am not sure why. I double-checked the code;
nothing seems to be wrong with the simple subtraction. My understanding was
always that WALL TIME (elapsed time) stays higher than CPU TIME (compute
time). I run my application on the head node of a Linux cluster comprising 24 compute nodes, each with 8 processors. Any pointers as to why this is happening?
In a multiprocessor environment, CPU time can exceed wall-clock time: if your process spends enough of its run executing on more than one processor simultaneously, the per-processor times add up to more than the elapsed time.
After all, why else would you go multiprocessor, but to have more CPU cycles available per unit of wallclock time?
Yes, your explanation best explains the difference. But does this still hold if the application is single-threaded?
I was confused by the exactly opposite results on the same machine: for the same application written in MATLAB, CPU time was always less than wall time, whereas the C++ version produced CPU time higher than wall time in 95% of runs. Maybe the C++ code simply requires more processing cycles than the MATLAB code.