Programming: This forum is for all programming questions.
The question does not have to be directly related to Linux and any language is fair game.
I am writing a packet processor that takes packets from 1 to n distinct streams sequentially and processes them one at a time. How can I *accurately* measure the processing delay for each individual packet?
Each packet stream is continuous, but the streams are not synchronized, i.e. for each stream the arrival time of the first bit of each packet is different.
For example, when processing the first packet, I would like to know how much time is spent on the actual processing of that packet, so I don't want to include any time when the process gets swapped out, etc. For this reason, I don't want to simply get the system time before and after the function call.
I hope my question is clear, but please ask me to clarify if it isn't.
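One approach that measures only the CPU time the thread actually consumes (so intervals where it is scheduled out or swapped out are not counted) is the POSIX per-thread CPU clock. Below is a minimal sketch, assuming clock_gettime() with CLOCK_THREAD_CPUTIME_ID is available; process_packet() is a hypothetical stand-in for the real per-packet work, not code from this thread.

Code:
/* Per-packet CPU-time measurement sketch.
 * CLOCK_THREAD_CPUTIME_ID counts only CPU time consumed by this thread,
 * so time spent scheduled out is excluded.
 * Compile with: gcc -o timing timing.c   (older glibc may need -lrt)
 */
#include <stddef.h>
#include <stdio.h>
#include <time.h>

static void process_packet(const void *pkt, size_t len)
{
    /* placeholder for the real packet-processing work */
    (void)pkt;
    (void)len;
}

static double cpu_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_THREAD_CPUTIME_ID, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    char packet[1500] = {0};            /* dummy packet buffer */

    double start = cpu_seconds();
    process_packet(packet, sizeof packet);
    double elapsed = cpu_seconds() - start;

    printf("per-packet CPU time: %.9f s\n", elapsed);
    return 0;
}

The resolution of the CPU-time clocks depends on the kernel, so very small packets may need to be measured in batches.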
Try using the GNU profiler (the command is "gprof"). It's made especially for this kind of thing. I have no experience using it, but I think it's the best (only?) way to do this. Excluding the time when a process gets swapped out may be difficult or impossible, but I'm not sure about that.
See "man gprof". Also, the freely downloadable book "Advanced Linux Programming" (New Riders), Appendix A3, has an introduction to gprof.
You can download the book, chapter by chapter here:
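For reference, the usual gprof workflow is to recompile with -pg, run the program (which writes gmon.out on exit), and then post-process that file. The sketch below only illustrates that workflow; the file and function names are made up, not taken from the thread.

Code:
/* Minimal gprof sketch (see "man gprof"):
 *   gcc -pg -O2 -o pktproc pktproc.c   # -pg adds profiling hooks
 *   ./pktproc                          # writes gmon.out on exit
 *   gprof pktproc gmon.out             # prints the flat profile
 */
#include <stdio.h>

static void process_packet(int id)
{
    volatile long sum = 0;              /* some work for gprof to attribute */
    for (long i = 0; i < 1000000; ++i)
        sum += i * id;
}

int main(void)
{
    for (int i = 0; i < 100; ++i)
        process_packet(i);
    puts("done");
    return 0;
}

Note that gprof reports accumulated time per function after the run, not per-packet figures at run time.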
I thought of using the profiler, but I would like the information at run time; I'm not just trying to optimize my code.
I came up with this method:
I can call my function as a separate process instead. Then (somehow) have that process access its own process descriptor and read the per_cpu_utime and per_cpu_stime fields, which hold the number of ticks the process has run in user and kernel mode. Does anyone think this will work?
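A userspace way to read essentially the same counters, without reaching into the process descriptor, is getrusage(). The sketch below is only an analogue of that idea, under the assumption that the granularity of the accumulated user/system CPU times is acceptable for per-packet measurements (it may be too coarse for very small packets).

Code:
/* Read accumulated user and system CPU time around one packet's work. */
#include <stdio.h>
#include <sys/resource.h>
#include <sys/time.h>

static double to_sec(struct timeval tv)
{
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    struct rusage before, after;

    getrusage(RUSAGE_SELF, &before);
    /* ... process one packet here ... */
    getrusage(RUSAGE_SELF, &after);

    printf("user CPU:   %.6f s\n",
           to_sec(after.ru_utime) - to_sec(before.ru_utime));
    printf("system CPU: %.6f s\n",
           to_sec(after.ru_stime) - to_sec(before.ru_stime));
    return 0;
}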
Also, I think what I'm doing is similar to the Linux time function, but I can't seem to locate the source code for that.
Quote: Originally posted by rasselin: Also, I think what I'm doing is similar to the Linux time function, but I can't seem to locate the source code for that.