Programming: This forum is for all programming questions.
The question does not have to be directly related to Linux and any language is fair game.
I'm creating a multi-agent program in which I need to know the exact
CPU usage of my child processes.
On Windows there is a function named GetProcessTimes().
Is there an appropriate Linux alternative that returns the number
of instructions executed by a process, or the amount of CPU time a child
process has used, from the process's birth until now?
First, keep track of all your children's process IDs by noting what the fork() function returns. Then, for each process ID (567, say), read the file [B]/proc/567/stat[/B]. Two of the fields it contains, utime and stime, are exactly what you want. For information on the numbers you get back, consult the proc man page at the command prompt: man 5 proc
Hi,
Thank you for the reply,
but the thing is, utime and stime do not change very often. In my
experiment these two update only every 2-3 seconds, and they increment
by one step each time.
Also, is there any faster way that avoids re-reading the file each
time I want to know the process times?
Alright, so you need to figure out how much CPU power each of your child processes is consuming, hmmm.
I'm not sure about this, but I guess you could calculate that yourself by measuring how much time each process has been taking (between two points in your code, or so).
I'm going to assume that each of your processes runs an infinite loop of some sort. You could then insert CPU-tick counting functions at the start and end of each iteration (a time counter, if you know what I mean). Once you know how much time each iteration of each process has taken, you can probably estimate the CPU usage of each one by dividing the overall CPU usage by each process's specific time consumption, etc.
It may help if you re-state the problem, so that you can come up with a metric that better answers your question or better assists you in making your decision, whatever it is.
"The CPU load of a process" is actually a fairly abstract number: it depends upon the nature of the machine, the nature of the workload presented by this application, and the "ambient load," which is the workload presented by all the other processes running at the same time. As a result, it's probably not very useful on its own.
A much better metric would be some sort of metering or counter that the process generates for itself, and which is somehow meaningfully related to the work that this process is designed to do. For example, the number of requests completed per-second, or the ratio of the total application workload that was completed by this particular process. A still better metric might consider the totals of the various kinds of wait-time that are being experienced by the requests ... or by some random sample of those requests.