
rpjanaka 04-20-2010 05:08 AM

Memory usage increase without leaks
 
I am experiencing a continuous increase in memory usage without any memory leak. I say there is no memory leak because:
• I have checked with Purify
• I have confirmed that there is no intentional memory leak of the kind that tools like Purify cannot find

My binary is a 32-bit build running on a 64-bit machine, and I use the top command to check memory usage.
My program is written in C++ and it is a bit difficult to describe the exact logic here, but on the memory allocation side it basically does the following:
1) Creates some data structures by allocating many small memory blocks (by inserting some data). Total allocation is around 3 GB.
2) Inserts the same data again (repeats step 1).

When the same data set is inserted a second time, the total allocated memory should not increase (that is how it was designed). But unfortunately it does increase in my case.

This increase seems to follow a pattern: the second insert increases memory usage by a large amount, and subsequent inserts increase it by progressively less.

Is this due to memory fragmentation, or to running a 32-bit binary in a 64-bit environment, or some kind of issue with the top command's output? Does anyone have a suggestion about why this kind of thing happens?

johnsfine 04-21-2010 02:13 PM

To get useful help, you need to be clearer about what you are measuring and probably also about what you are testing.

Quote:

Originally Posted by rpjanaka (Post 3941270)
I have confirmed that there is no intentional memory leak of the kind that tools like Purify cannot find

There are many kinds of memory leaks that Purify can't find.

Quote:

I use the top command to check memory usage.
top shows many values. Which one are you looking at and calling "memory usage"? The VIRT size of the process you are testing might be a decent measure. Anything else, especially the system-wide Mem: used value, is not a meaningful measure of what you seem to think you are measuring.

Quote:

When the same data set is inserted a second time, the total allocated memory should not increase (that is how it was designed).
That is a very unclear statement.

Quote:

Is this due to memory fragmentation
Maybe.

Quote:

or to running a 32-bit binary in a 64-bit environment
Running a 32-bit binary under a 64-bit kernel lets it use 4 GB instead of 3 GB of virtual address space. That may cause the VIRT value in top to be higher than it would be under a 32-bit kernel. In that case either the VIRT value is not a good measure of memory use, or the program would have failed under a 32-bit kernel. If it works under both, it won't use more memory under a 64-bit kernel than under a 32-bit one; it just might commit more memory.

sundialsvcs 04-21-2010 10:55 PM

The memory allocator can basically do whatever it wants. Its behavior might be unpredictable, and might produce misleading statistics.

A good strategy might be to re-code the program to use a free list. When you no longer need a memory-block (at the end of step #1), knowing that you will very soon again need that memory block (in step #2), don't bother to "free()" it. Instead, simply attach it to a singly-linked list.

The program's memory-allocation logic then becomes: first check the list and take the first block from it if the list isn't empty; otherwise fall back to the normal allocator.

If you know that you'll be allocating blocks of the same size, consider allocating memory in large multiples of that size and then splitting it up.

When the program ends, it can simply end, knowing that all of its memory will be cleaned-up anyway.
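
A minimal sketch of that free-list idea in C++ (a fixed block size is assumed, and the class and constant names are purely illustrative):

Code:

#include <cstddef>
#include <new>

// Minimal sketch of the free-list idea described above, assuming all blocks
// have the same fixed size. The class and constant names are illustrative.
class FreeList {
    struct Node { Node* next; };

    Node* head = nullptr;
    static constexpr std::size_t kBlockSize = 64;   // must be >= sizeof(Node)

public:
    // Hand out a block, reusing one from the list when possible.
    void* allocate() {
        if (head != nullptr) {
            Node* n = head;
            head = n->next;
            return n;
        }
        return ::operator new(kBlockSize);      // otherwise fall back to the heap
    }

    // Instead of free(): push the block onto the list for later reuse.
    void release(void* p) {
        Node* n = static_cast<Node*>(p);
        n->next = head;
        head = n;
    }
};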

johnsfine 04-22-2010 06:28 AM

Quote:

Originally Posted by sundialsvcs (Post 3943496)
The memory allocator can basically do whatever it wants. Its behavior might be unpredictable, and might produce misleading statistics.

But the versions of malloc in general use in the GNU libraries are functionally very good at reusing free blocks of the right size when they exist.

All the behavior is very predictable, stable, and conservative until the pool of free memory inside the process runs out and more must be requested from the OS. At that point, the amount it requests (the bump in the VIRT size of the process) depends on more factors, so it may seem unpredictable. Also at that point, the extra memory is merely committed, not used, so the statistics (including VIRT) are misleading. But apart from the statistics, that is not a serious problem, because committing memory without using it generally has near-zero cost on Linux.
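
To see that committed-vs-used distinction directly, here is a small standalone sketch (assuming a Linux /proc filesystem; the 512 MB allocation size is arbitrary) that prints the process's VmSize, which is top's VIRT figure, and VmRSS before and after a large malloc, and again after the pages are actually touched:

Code:

#include <cstddef>
#include <cstdlib>
#include <cstring>
#include <fstream>
#include <iostream>
#include <string>

// Print the VmSize (top's VIRT) and VmRSS lines from /proc/self/status.
// Assumes a Linux /proc filesystem; this is purely an illustrative sketch.
static void printMemStats(const char* label) {
    std::ifstream status("/proc/self/status");
    std::string line;
    std::cout << label << ":\n";
    while (std::getline(status, line)) {
        if (line.compare(0, 6, "VmSize") == 0 || line.compare(0, 5, "VmRSS") == 0)
            std::cout << "  " << line << "\n";
    }
}

int main() {
    printMemStats("before malloc");

    const std::size_t n = 512u * 1024u * 1024u;     // 512 MB, arbitrary size
    char* p = static_cast<char*>(std::malloc(n));   // committed, not yet used
    if (p == nullptr)
        return 1;
    printMemStats("after malloc (committed only)");

    std::memset(p, 0, n);                           // now the pages are used
    printMemStats("after touching the pages");

    std::free(p);
    return 0;
}

Typically the malloc alone raises VmSize right away while VmRSS barely moves; only after the memset does VmRSS grow to match.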


Quote:

A good strategy might be to re-code the program to use a free list. When you no longer need a memory-block (at the end of step #1), knowing that you will very soon again need that memory block (in step #2), don't bother to "free()" it. Instead, simply attach it to a singly-linked list.
If that would have been effective, then malloc's ability to reuse a block of the right size when one is available would also have been effective. Such a free list might significantly reduce the CPU time spent allocating and freeing memory. Depending on allocation/use patterns, it might reduce cache misses. But there are very few situations in which it could predictably reduce memory fragmentation. Typically the balance of memory use inside vs. outside such a pool shifts enough during the run of the program that having the pool increases memory fragmentation.

Quote:

Originally Posted by sundialsvcs (Post 3943496)
If you know that you'll be allocating blocks of the same size, consider allocating memory in large multiples of that size and then splitting it up.

Meaning something similar to the "slab allocator" used inside the Linux kernel.

If you have a good understanding of the memory request behavior of your program, then with a significant amount of effort you can use a slab allocator to reduce the overhead of memory management. If a 32-bit program is making enough small allocation requests to total near 3 GB, that is some indication that a slab allocator could help.

But a slab allocator would just reduce the overhead and maybe keep the program fitting in a 32-bit address space longer.
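
To make that concrete, here is a minimal sketch of the "allocate a large multiple of the block size, then split it up" idea in the spirit of a slab allocator (the object size, slab size, and all names are illustrative assumptions, not anything from the original program):

Code:

#include <cstddef>
#include <new>
#include <vector>

// Minimal sketch of a slab-style arena: grab large chunks from the heap and
// carve them into fixed-size objects. Sizes and names are illustrative only.
class SlabArena {
    static constexpr std::size_t kObjSize     = 64;     // fixed object size
    static constexpr std::size_t kObjsPerSlab = 1024;   // objects per big chunk

    std::vector<char*> slabs;   // the big chunks obtained from the heap
    char* next = nullptr;       // next unused object in the current chunk
    char* end  = nullptr;

public:
    void* allocate() {
        if (next == end) {      // current chunk exhausted: grab another one
            char* slab = static_cast<char*>(::operator new(kObjSize * kObjsPerSlab));
            slabs.push_back(slab);
            next = slab;
            end  = slab + kObjSize * kObjsPerSlab;
        }
        void* p = next;
        next += kObjSize;
        return p;
    }

    // Individual objects are never freed; every chunk is released together,
    // which is what keeps the per-allocation overhead low.
    ~SlabArena() {
        for (std::size_t i = 0; i < slabs.size(); ++i)
            ::operator delete(slabs[i]);
    }
};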

If there are symptoms of a memory leak, the slab allocator (or free lists or whatever) doesn't address that. Either find and fix the memory leak, or understand the statistics that created the appearance of a memory leak well enough to realize whether that appearance is false.

sundialsvcs 04-23-2010 10:06 PM

I agree entirely with what John says ... and BTW the two observations are not, in fact, conflicting.

As I/we said, there are many factors which affect memory handling and just as many factors which affect ("lies, damn lies, and...") statistics. :) There are likewise strategies that can be considered in specialized situations (such as the "free list" strategy), and because of the OP's description of exactly how this program is supposed to work, the judgment might be made to apply one of those strategies here.

(It goes without saying that the GNU memory allocation system is superlatively designed ... and I am not suggesting otherwise.)

But the first thing, always, is ... to thoroughly understand the program, and to explore very, very carefully the (very probable...) theory that it holds "yet one more" insidiously clever bug.

rpjanaka 04-26-2010 08:35 AM

Hi John,
Thanks for your reply; I have a few things to ask.

Quote:

There are many kinds of memory leaks that Purify can't find.
Can you please give some examples?

Quote:

top shows many values. Which one are you looking at and calling "memory usage"?
I am using the VIRT value for the statistics.

Quote:

That is a very unclear statement.
I can explain this with an example:
Assume you insert a data set called X, and the process consumes 10 MB.
Then you insert the same data set (X) again; the memory consumption of the process should stay at 10 MB.

rpjanaka 05-03-2010 06:55 AM

Hi John,
According to you, this could be due to memory fragmentation. If so, is there a way to investigate this (from the OS side)?
e.g., analyzing /proc/buddyinfo or checking pmap output

rpjanaka 07-01-2010 06:12 AM

Hi all,

I was able to find the reason for this memory growth. It is exactly an intentional leak (keeping unwanted memory and only releasing it at shutdown time). Basically it is due to not clearing a std::list.
So my advice for others: DO NOT assume there are problems with the OS,
e.g., wrong output from the top command, or memory growth due to fragmentation.

Only make that kind of assumption if you have pretty good proof. In my case the program makes a lot of small memory allocations (more than a couple of million), yet there is still no memory growth due to fragmentation. So the assumptions I made at the beginning of this thread were completely wrong.
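
For what it's worth, the general shape of that bug is something like the following (a hypothetical sketch with invented names, not the actual code): entries keep accumulating in a std::list because it is never cleared between insert passes, which shows up in top as steady VIRT growth.

Code:

#include <list>
#include <string>

// Hypothetical illustration of the kind of bug described above (all names
// are invented): a std::list that keeps accumulating entries across insert
// passes because it is never cleared.
struct Index {
    std::list<std::string> scratch;   // temporary data kept between passes

    void insertDataSet(const std::list<std::string>& data) {
        // scratch.clear();           // <-- the missing call: without it,
        //                            //     every pass keeps growing the list
        for (const std::string& item : data)
            scratch.push_back(item);

        // ... build the real data structures from 'scratch' ...
    }
};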

