How to _accurately_ measure memory usage?
Hello all,
We're running an embedded Linux system (a 2.4 kernel on a MIPS processor).
We're trying to accurately measure memory usage. Specifically, we need to measure the number of unique pages of memory used by our application software. The problem we're having, I think, is the inaccurate counting of shared-library pages. The per-process memory calculation performed by tools such as "vmstat", "free" and "top" counts a shared page m times, where m is the number of processes that share the page. The result is a memory-usage figure that's higher than the actual physical usage: for each set of m processes that share n pages, the reported figure is (m x n), which is [n x (m - 1)] pages too high, when the correct figure is simply n. (For example, three processes sharing a 100-page library get counted as 300 pages instead of 100.) What we need is a count of the total number of unique pages in use.

We eventually need two measurements. First and foremost, we need to accurately measure the total memory used when the system is running in a "steady state" (of course this would be an average measurement over time). Secondly, we need to measure the amount of memory used per process.

Can anyone point me to system utilities, tools, or even a strategy/approach for writing my own application that would yield the correct measurement? One problem we have is the lack of support for 3rd-party tools. We're running a proprietary system on a MIPS processor (not too popular these days), so I expect we'll have some difficulty getting support, or even getting a version that runs on our platform. Many thanks for any suggestions or leads. Vartan |
Don't know about a 2.4 kernel - too many years since I used one in anger.
One of the later 2.6 kernels (2.6.14, something like that) introduced smaps. Just what you need. |
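(For anyone on a kernel that does have it, a minimal sketch of the smaps approach - this assumes the per-mapping "Rss:" lines of the 2.6 smaps format and borrows the PID 3789 used later in this thread:)
Code:
# Sum the resident size across all mappings of one process.
# Needs /proc/<pid>/smaps (2.6.14 and later), so it is no help on 2.4.
awk '/^Rss:/ { kb += $2 } END { printf "resident: %d kB\n", kb }' /proc/3789/smaps
|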
What about the /proc file system? Doesn't it give a precise list of all the memory in use?
|
Perhaps have a look at this article; it discusses using pmap. Not sure if pmap is available under 2.4 - it wasn't on the only system I have access to.
Be aware that the “writeable/private” total cited is the mapped total, not the (private) resident size. As I said, smaps does this nicely, but not where you are. |
Have a look at memstat
|
Hi,
Thanks for the reply... Sorry, one question: could you explain the term "mapped"? Is it the process's private resident size plus the shared-lib space? Thanks. |
memstat source code?
Hi all,
Thanks for the replies. Can anyone point me to memstat source code? I followed the posted link and did a Google search but had no luck... several dead links. |
/proc file system
Hi again,
OK, I was told that moving to a 2.6 kernel is not an option for the foreseeable future. So I'm considering two options.

In the first approach, I suppose I can write a shell script that pipes ps output through a toolchain that counts each shared lib only once, then counts the private memory used by each process and sums the result. In the second approach, I'm wondering if I can write a C program that counts the total number of memory pages in use. This would not give me per-process figures, but it would give me an accurate total. Can anyone get me started on how to do this? Does it require accessing the /proc file system or some other part of the kernel? Can anyone point me to kernel API docs or anything? Thanks again. |
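(A minimal sketch of the second approach, reading /proc/<pid>/maps directly - the maps files do exist on 2.4. The field layout assumed below is "start-end perms offset dev inode [path]", the dedup key is only an approximation of "unique pages", and, like pmap on 2.4, this sees mapped sizes rather than resident sizes:)
Code:
#!/bin/sh
# Rough total of unique mapped memory across all processes.
# File-backed regions are keyed on dev:inode:offset so a shared library
# segment is counted once no matter how many processes map it; anonymous
# regions (inode 0) are private, so every instance is counted.
cat /proc/[0-9]*/maps 2>/dev/null | awk '
function hex2dec(h,    i, n) {
    n = 0
    h = tolower(h)
    for (i = 1; i <= length(h); i++)
        n = n * 16 + index("0123456789abcdef", substr(h, i, 1)) - 1
    return n
}
{
    split($1, r, "-")
    size = hex2dec(r[2]) - hex2dec(r[1])
    if ($5 == 0) {                      # anonymous mapping: always count
        total += size
    } else {                            # file-backed: count each region once
        key = $4 ":" $5 ":" $3
        if (!(key in seen)) { seen[key] = 1; total += size }
    }
}
END { printf "unique mapped: %d kB\n", total / 1024 }'
|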
How to interpret pmap output
I've been playing with pmap. That seems to be my best option given that I can't go to a Linux 2.6 kernel.
Can anyone tell me how I should interpret the lines with [ anon ] and [ stack ]? I am guessing that the [ stack ] line is the actual process stack - is that correct? What about [ anon ]? What is it, and how should I count it? For example, here's some output from my system: Code:
# pmap -d 3789 Thanks again. |
Add it all in - that big [anon] will be the heap.
Personally I would just run pmap -d and parse out the total line at the bottom, as the referenced article suggested. The writeable/private total is probably what you are looking for. Maybe I read too much into your requirements - I thought you were looking for (true) RSS rather than (true) private size ("mapped" was a poor choice of terminology - think virtual). If your app has malloc'd storage but not yet referenced it, I guess you still want to count it - so use the total and save yourself some grief. The maths will still be out, but only marginally - the data segments of the shared libs should probably be booked to you anyway. |
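(Something along these lines, perhaps - this assumes a procps pmap whose -d output ends with a total line of the form "mapped: 9848K  writeable/private: 1352K  shared: 28K"; check what your build actually prints before trusting the field number:)
Code:
#!/bin/sh
# Print the writeable/private total (in K) for one PID, e.g. ./priv.sh 3789
# The awk field index assumes the total-line layout shown above.
pmap -d "$1" | awk '/^mapped:/ { print $4 }'
|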
Thanks again for your reply syg00. Actually, I think the writable/private totals are wrong, are they not? I understood the article to say that the totals double count the shared libs.
I wrote a shell script that takes the output of "ps -ef" and, for each PID, runs "pmap -d" on it, then pipes the result through sed and sort. I noticed the addresses are all unique. By keeping only the unique lines from sort's output, I count each shared-lib segment only once, and each process's private code and data space once. All the "anon" regions are counted once as well. I think this is what I need.

I do have a few questions however. :-) I noticed that for a lot of the processes owned by root, I get output like the following when I run pmap on the PID: Code:
# ps -ef | more

How can I tell whether memory allocated/used by the kernel is accounted for? How can I tell how much memory the kernel itself is using?

So now I need to validate my results. I wrote a simple C program that continually malloc()s memory (e.g. 1 MByte at a time) until failure, then totals the amount successfully malloc()'ed. If I add this number to my shell script's output (the sum of the figures reported by pmap for all processes), I should get the total memory "seen" by the OS (kernel). Correct or not? Many thanks again for your replies and patience! |
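(One crude cross-check, for what it's worth: /proc/meminfo is available on 2.4 as well, and MemTotal minus MemFree is everything the kernel considers in use - kernel allocations, buffers and page cache included - so expect it to come out higher than the sum of user-space mappings. A sketch, assuming the usual "MemTotal:"/"MemFree:" lines:)
Code:
# Total memory in use according to the kernel, including kernel memory,
# buffers and page cache, for comparison against the per-process sum.
awk '/^MemTotal:/ { t = $2 } /^MemFree:/ { f = $2 }
     END { printf "in use: %d kB of %d kB\n", t - f, t }' /proc/meminfo
|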