Memory usage increase without leaks
I am experiencing a continuous memory increase without any memory leak. I say there is no memory leak because:
• I have checked with Purify
• I have confirmed that there is no intentional memory leak of the kind that tools like Purify cannot find

My binary is a 32-bit version running on a 64-bit machine, and I use the top command to check the memory usage. My program is written in C++ and it is a bit difficult to describe the exact logic here, but on the memory-allocation side it basically does the following:
1) Create some data structures by allocating many small memory blocks (by inserting some data). Total allocation will be around 3GB.
2) Insert the same data again (repeat step 1).

When the same data set is inserted the second time, the total allocated memory should not increase (that is how it was designed). But unfortunately, in my case it does increase, and the increase seems to follow a pattern: the second insert increases memory usage by a large amount, and each subsequent insert increases it by less.

Is this due to memory fragmentation, to running a 32-bit binary in a 64-bit environment, or to some kind of issue with the top command output? Does anyone have a suggestion about why this is happening? |
To get useful help, you need to be clearer about what you are measuring and probably also about what you are testing.
|
The memory allocator can basically do whatever it wants. Its behavior might be unpredictable, and might produce misleading statistics.
A good strategy might be to re-code the program to use a free list. When you no longer need a memory block (at the end of step #1), knowing that you will very soon need that memory block again (in step #2), don't bother to "free()" it. Instead, simply attach it to a singly-linked list. The program's memory allocation logic then becomes: check the list first, removing the first block from it if the list isn't empty. If you know that you'll be allocating blocks of the same size, consider allocating a large multiple of that size and splitting it up. When the program ends, it can simply exit, knowing that all of its memory will be cleaned up anyway. |
All the behavior is very predictable, stable and conservative until the pool of free memory inside the process runs out and more must be requested from the OS. Then the amount it requests (the bump in the VIRT size of the process) depends on more factors, so it may seem unpredictable. And even then, the memory is just committed, not used, so the stats (including VIRT) are misleading. But other than the stats, that is not a serious problem, because committing memory without using it generally has near-zero cost in Linux.
If you have a good understanding of the memory request behavior of your program, then with a significant amount of effort you can use a slab allocator to reduce the overhead of memory management. If a 32-bit program is making enough small allocation requests to total near 3GB, that is some indication that a slab allocator could help. But a slab allocator would just reduce the overhead and maybe keep the program fitting in the 32-bit address space longer. If there are symptoms of a memory leak, the slab allocator (or free lists, or whatever) doesn't address that. Either find and fix the memory leak, or understand the statistics that created the appearance of a memory leak well enough to realize whether that appearance is false. |
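The slab idea mentioned above could look roughly like this (a minimal sketch with invented names; a production slab allocator would also track per-slab occupancy so empty slabs can be returned to the OS):

```cpp
#include <cassert>
#include <cstddef>
#include <new>
#include <vector>

// Minimal slab allocator sketch: carve large slabs into fixed-size slots,
// so millions of small allocations share a handful of big OS requests.
class Slab {
public:
    Slab(std::size_t slotSize, std::size_t slotsPerSlab)
        : slotSize_(slotSize < sizeof(void*) ? sizeof(void*) : slotSize),
          slotsPerSlab_(slotsPerSlab), free_(nullptr) {}

    ~Slab() { for (char* s : slabs_) ::operator delete(s); }

    void* allocate() {
        if (!free_) grow();          // no free slot: request one big slab
        void* p = free_;
        free_ = *static_cast<void**>(free_);
        return p;
    }

    void deallocate(void* p) {       // push the slot onto the free list
        *static_cast<void**>(p) = free_;
        free_ = p;
    }

private:
    void grow() {
        char* slab = static_cast<char*>(::operator new(slotSize_ * slotsPerSlab_));
        slabs_.push_back(slab);
        // Thread every slot of the new slab onto the intrusive free list.
        for (std::size_t i = 0; i < slotsPerSlab_; ++i)
            deallocate(slab + i * slotSize_);
    }

    std::size_t slotSize_, slotsPerSlab_;
    void* free_;                     // intrusive singly-linked free list
    std::vector<char*> slabs_;       // owned slabs, released in the destructor
};
```

Compared with per-object malloc/free, this removes the per-allocation header overhead and keeps same-sized objects packed together, which is where the address-space savings for a 3GB working set would come from.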
I agree entirely with what John says ... and BTW the two observations are not, in fact, conflicting.
As we said, there are many factors which affect memory handling, and just as many factors which affect ("lies, damn lies, and ...") statistics. :) There are likewise strategies that can be considered in specialized situations (such as the "free list" strategy), and given the OP's description of exactly how this program is supposed to work, the judgment might be made to apply one of those strategies here. (It goes without saying that the GNU memory allocation system is superlatively designed, and I suggest nothing otherwise.) But the first thing, always, is to thoroughly understand the program, and to explore very, very carefully the (very probable ...) theory that it holds "yet one more" insidiously clever bug. |
Hi John,
Thanks for your reply. I have a few things to ask.

Quote:
There are many kinds of memory leaks that Purify can't find.
Can you please give some examples?

Quote:
top shows many values. Which are you looking at that you call "memory usage"?
I am using the VIRT value for statistics.

Quote:
That is a very unclear statement.
I can explain this with an example: assume you insert a data set called X, and the process consumes 10MB. Then you insert the same data set (X) again; now the memory consumption of the process should stay at 10MB. |
Hi John,
According to you, this could be due to memory fragmentation. If so, is there a way to investigate this (from the OS side)? e.g. analyzing /proc/buddyinfo or checking pmap output... |
Hi all,
I was able to find the reason for this memory growth. It is exactly an intentional leak of the kind I mentioned at the start (memory that is no longer wanted is kept and only released at shutdown time). Basically it was caused by not clearing a std::list.

So my advice to others: DO NOT assume there are problems with the OS (e.g. wrong output from the top command, or memory growth due to fragmentation) unless you have pretty good proof. In my case the program makes lots of small memory allocations (more than a couple of million), yet there is still no memory growth due to fragmentation. So the assumptions I made at the beginning of this post were completely wrong. |