Linux - General | This Linux forum is for general Linux questions and discussion. If it is Linux related and doesn't seem to fit in any other forum, then this is the place.
04-22-2015, 10:55 PM | #1
Member | Registered: Oct 2010 | Posts: 47
How to measure I/O latency?
Hi, everyone.
Recently, I had to measure the performance of my program and discovered a performance bottleneck in it. My program is really I/O intensive, and I want to measure the latency of my I/O operations to determine whether that is where the bottleneck is.
My program uses libaio for I/O.
I measured the latency by calling the "times" function before calling "io_submit" and again after "io_getevents" returns, then computing the difference. But I wonder whether this gives an unreliable result because of the time consumed by "times" itself. If so, how should I measure this latency?
Thanks:-)
Last edited by rainman1985_2010; 04-22-2015 at 11:13 PM.
04-23-2015, 08:13 AM | #2
Senior Member | Registered: May 2004 | Location: In the DC 'burbs | Distribution: Arch, Scientific Linux, Debian, Ubuntu | Posts: 4,290
How about running your program under a profiler like gprof? I haven't used it in a while, but I suspect it would do the job. The profiler will give you a rough idea of where your code is spending most of its time.
If you can characterize the I/O pattern of your program and want a system-level baseline for how well I/O is performing, you can use fio to benchmark your system's I/O performance. It has a mode that uses libaio, so the results ought to be fairly comparable. It reports bandwidth, latency, and CPU utilization during its run.
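For example, a small fio job file that exercises the libaio engine might look like the following sketch (the job name, file size, block size, and queue depth here are illustrative, not recommendations — tune them to match your program's actual I/O pattern):

```ini
; random-read benchmark through the libaio engine
[global]
ioengine=libaio
direct=1
runtime=30
time_based

[randread-test]
rw=randread
bs=4k
size=1g
iodepth=16
```

Running `fio` on this file prints per-job latency percentiles, which you can compare against the numbers your own instrumentation produces.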
04-24-2015, 01:16 PM | #3
Moderator | Registered: Mar 2011 | Location: USA | Distribution: MINT Debian, Angstrom, SUSE, Ubuntu, Debian | Posts: 9,914
Quote:
Originally Posted by rainman1985_2010
I measured the latency by calling the "times" function before calling "io_submit" and again after "io_getevents" returns, then computing the difference. But I wonder whether this gives an unreliable result because of the time consumed by "times" itself. If so, how should I measure this latency?
I usually do something very similar to what you've done. I use the function that gives me seconds and microseconds since the epoch and create logs. It's not uncommon to see a macro in my code that looks like a printf but is really a "time logger", with print strings like "<1>\n", "<1a>\n", "<1b>\n": I number them 1 to N, find a low-performing area, leave the rest of the logs in place, and then add finer-grained markers within the sub-area where I found too much time being taken. I make these macros conditional so that I can compile them in or out depending on whether I need to re-check performance.
I don't worry about the overhead of the prints or the calls to time; it's worth more to find out where my problem areas are. The main point is that I start coarse and then refine: I instrument the whole initialization or data path with the 1, 2, 3, ... (or A, B, C, ...) method, and when I find a particular area that takes a long time, I drill deeper there.
Also consider that if you add a timestamp just before loop logic (A) and just after it, and likewise just before and after loop logic (B), you've impacted the entry and exit of each of those sections similarly. And usually the exit timestamp for section (A) doubles as the entry timestamp for section (B).