Linux - Software
I am using the 'time' command to see how long a process takes to complete. The command displays 'real', 'user', and 'sys'. The value of 'real' is always greater than the sum of 'user' and 'sys'. My question is: how can the difference be explained? In the man page of 'time', under "accuracy", it says:
"The elapsed time is not collected atomically with the execution of
the program; as a result, in bizarre circumstances (if the `time'
command gets stopped or swapped out in between when the program being
timed exits and when `time' calculates how long it took to run), it
could be much larger than the actual execution time."
This kind of explains it, but it is said to happen only in 'bizarre' circumstances. How, then, do you account for the difference under normal circumstances, given that 'real' is ALWAYS greater than the sum of the other two?
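For what it's worth, the effect is easy to reproduce with a command that mostly sleeps rather than computes (a minimal sketch; exact figures will vary from run to run):

```shell
#!/bin/sh
# 'sleep' burns almost no CPU, so 'user' and 'sys' stay near zero
# while 'real' is about two seconds of wall-clock time.
time sleep 2
```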
Let me see if I've come to the correct conclusion from the facts you provided.
One CPU second is still one real second; 'real' is wall-clock time, while 'user' and 'sys' are CPU time. 'real' is the time between the timed process starting and exiting, during which other, higher-priority processes (which may or may not be related) also run, and 'real' accounts for them in its value. 'user' and 'sys' are the exact CPU time consumed to carry out the process being timed.
Is this along the lines of what's going on with the 'time' command?
Very close. It's more like
sys = time spent in the kernel on the process's behalf
user = time spent in user mode (i.e. not including kernel work)
real = how long it took between pressing Enter and completion, which, as you pointed out, also includes time spent swapped out while other programs run.
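One way to see the user/sys split (a sketch; the 'dd' sizes and loop count are arbitrary): copying bytes through the kernel accrues mostly 'sys' time, while a pure shell arithmetic loop accrues mostly 'user' time.

```shell
#!/bin/sh
# Mostly kernel work: the time shows up under 'sys'.
time dd if=/dev/zero of=/dev/null bs=1M count=200 2>/dev/null

# Mostly user-mode work: the time shows up under 'user'.
time sh -c 'i=0; while [ "$i" -lt 200000 ]; do i=$((i+1)); done'
```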
You can also try running
\time prog
since the backslash bypasses the shell builtin and runs the external GNU version of 'time', which gives more info.
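Assuming GNU time is installed as /usr/bin/time (the usual case on Linux distros, though the -v flag is GNU-specific), the external binary can report far more than the builtin:

```shell
#!/bin/sh
# The backslash (or the full path) bypasses the bash builtin keyword.
# GNU time's -v ("verbose") prints maximum resident set size,
# page faults, context switches, and more, alongside real/user/sys.
\time -v sleep 1
# Equivalent, without relying on PATH lookup:
/usr/bin/time -v sleep 1
```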