[SOLVED] How to _accurately_ measure memory usage?
Linux - Kernel: This forum is for all discussion relating to the Linux kernel.
Distribution: Ubuntu 19.04 on Lenovo ThinkPad T440
Posts: 141
Rep:
How to _accurately_ measure memory usage?
Hello all,
We're running an embedded Linux system as follows:
Embedded Linux based on 2.4.29 Kernel
Custom distro (created our own by picking and choosing packages and utils that we needed)
Using ucblib for our libc
Virtual memory - yes
No swap (swapping turned off and no swap partitions)
Heavy use of shared libraries
We're trying to accurately measure memory usage. Specifically, we need to measure the number of unique pages of memory used by our application software.
The problem we're having, I think, is the inaccurate counting of shared library pages. The per-process memory calculation performed by tools such as "vmstat", "free" and "top" counts a shared page m times, where m is the number of processes that share the page. The result is a memory usage figure that's higher than the actual physical usage. In fact, for each set of m processes that share n pages, the figure is (m x n), which is [(m - 1) x n] pages too high. For example, three processes sharing 100 pages are reported as 300 pages, 200 too many. The correct figure should be simply n.
Instead, we need to count the total number of unique pages used. We eventually need two measurements. First and foremost, we need to accurately measure the total memory used when the system is running in a "steady state" (of course, this would be an average measurement over time). Secondly, we need to measure the amount of memory used per process.
Can anyone point me to system utilities, tools or even some strategy/approach for writing my own application that would yield the correct measurement?
One problem we have is the lack of support of 3rd party tools. We're running a proprietary system on a MIPS processor (not too popular these days). I think we're going to encounter some difficulty getting support--or even getting a version that runs on our platform.
Don't know about a 2.4 kernel - too many years since I used one in anger.
One of the later 2.6 kernels (2.6.14, something like that) introduced smaps. Just what you need.
Perhaps have a look at this. Discusses using pmap - not sure if it is available under 2.4; wasn't on the only system I have access to.
Be aware that the “writeable/private” total cited is the mapped total, not the (private) resident size.
As I said, smaps does this nicely, but not where you are.
Hi,
Thanks for the reply.... Sorry, one question... Could you explain the term "mapped"? Is it the process's private resident size plus the shared lib space?
/proc file system
Hi again,
OK, I was told moving to 2.6 kernel is not an option for the foreseeable future. So... I'm considering the following two options:
Write a shell script to call pmap for each process
Write a C program to access kernel page table and count pages used
In the first approach, I suppose I can write a shell script that takes the PIDs from ps output and pipes each pmap output through a pipeline that counts each shared lib only once. Then, it can count the private memory used by each process and sum the results.
In the second approach, I'm wondering if I can write a C program that counts the total number of memory pages in use. This would not give me per-process memory usage figures, but it would give me an accurate total memory usage number. Can anyone get me started on how to do this? Does this require accessing the /proc file system or some other part of the kernel? Can anyone point me to kernel API docs or anything?
How to interpret pmap output
I've been playing with pmap. That seems to be my best option given that I can't go to a Linux 2.6 kernel.
Can anyone tell me how I should interpret the lines with [ anon ] and [ stack ] ? I am guessing that the [ stack ] line is the actual process stack? Is that correct?
What about [ anon ] ? What is it; how should I count it?
Add it all in - that big [anon] will be the heap.
Personally, I would just run pmap -d and parse out the total line at the bottom, as the referenced article suggests. The writeable/private figure is probably what you are looking for. Maybe I read too much into your requirements - I thought you were looking for (true) RSS rather than (true) private ("mapped" was a poor choice of terminology - think virtual). If your app has malloc'd storage but not referenced it, I guess you still want to count it - use the total and save yourself some grief.
The maths will still be out, but only marginally - the data segments for shared libs should probably be booked to you.
Thanks again for your reply syg00. Actually, I think the writable/private totals are wrong, are they not? I understood the article to say that the totals double count the shared libs.
I wrote a shell script to take output from "ps -ef" and then, for each PID, run "pmap -d" on it. I then run it through sed and sort. I noticed the addresses are all unique. By outputting unique lines from sort, I count the shared lib segments only once, and each process's private code and data space once. All the "anon"s are counted once too. I think this is what I need.
I do have a few questions however. :-)
I noticed that for a lot of the processes owned by root, I get output like the following when I run pmap on its PID:
Does this mean that pmap can't "see" the main memory used by these processes? Is this kernel memory? I'm guessing here.
How can I tell if memory allocated/used by the kernel is accounted for?
How can I tell how much memory the kernel is using?
So now I need to validate my results. I wrote a simple C program to continually malloc() memory (e.g. 1 Mbyte at a time) until failure. I then total the amount of memory successfully malloc()'ed. If I add this number to my shell script output (that summed the figures reported by pmap for all processes), I should get the total memory "seen" by the OS (kernel). Correct or not?