LinuxQuestions.org
LinuxQuestions.org > Forums > Linux Forums > Linux - Software > Linux - Kernel
Linux - Kernel This forum is for all discussion relating to the Linux kernel.

Old 04-20-2010, 05:08 AM   #1
rpjanaka
LQ Newbie
 
Registered: Jun 2007
Posts: 4

Rep: Reputation: 0
Memory usage increase without leaks


I am experiencing a continuous increase in memory usage without any memory leak. I say there is no leak because I have checked with Purify and confirmed that there is none, apart from any intentional leak of the kind that tools like Purify cannot find.

My binary is a 32-bit executable running on a 64-bit machine, and I use the top command to check memory usage.
My program is written in C++ and it is a bit difficult to describe the exact logic here, but on the memory-allocation side it basically does the following:
1) Creates some data structures by allocating many small memory blocks (while inserting data). Total allocation is around 3 GB.
2) Inserts the same data again (repeats step 1).

When the same data set is inserted the second time, the total allocated memory should not increase (that is how it was designed). But unfortunately, in my case it does.

The increase seems to follow a pattern: the second insert raises memory usage by a large amount, and each subsequent insert raises it by less.

Is this due to memory fragmentation, to running a 32-bit binary in a 64-bit environment, or to some issue with the output of the top command? Does anyone have a suggestion about why this kind of thing happens?
 
Old 04-21-2010, 02:13 PM   #2
johnsfine
Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,064

Rep: Reputation: 1106
To get useful help, you need to be clearer about what you are measuring and probably also about what you are testing.

Quote:
Originally Posted by rpjanaka View Post
And confirm that there is no any Intentional memory leak that cannot find with tools like Purify
There are many kinds of memory leaks that Purify can't find.

Quote:
uses the top command to check the memory usage.
top shows many values. Which one are you looking at and calling "memory usage"? The VIRT size of the process you are testing might be a decent measure. Anything else, especially the system-wide "Mem: used" value, is not a meaningful measure of what you seem to think you are measuring.

Quote:
When insert the same data set in the second time, it should not increase the total allocated memory (that is how it was designed).
That is a very unclear statement.

Quote:
Is this due to memory fragmentation
Maybe.

Quote:
or due to running the 32bit binary in 64bit environment
Running the 32-bit binary under a 64-bit kernel lets it use 4 GB instead of 3 GB of virtual address space. That may cause the VIRT value in top to be higher than it would be under a 32-bit kernel. In that case, either the VIRT value is not a good measure of memory use, or the program would have failed under a 32-bit kernel. If it works in both, it won't use more memory under a 64-bit kernel than under a 32-bit one; it just might commit more memory.
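For reference, the VIRT value top reports corresponds to the VmSize line in /proc/[pid]/status, so a program can log its own virtual size at interesting points. A minimal Linux-specific sketch (the function name vm_size_kb is my own):

```cpp
#include <fstream>
#include <string>

// Read this process's virtual size (what top shows as VIRT) from
// /proc/self/status. Linux-specific; returns kB, or -1 on failure.
long vm_size_kb() {
    std::ifstream status("/proc/self/status");
    std::string key;
    long kb = -1;
    while (status >> key) {
        if (key == "VmSize:") {     // line looks like: "VmSize:  123456 kB"
            status >> kb;
            break;
        }
        status.ignore(4096, '\n');  // skip the rest of this line
    }
    return kb;
}
```

Calling this before and after each insert pass isolates the growth from everything else top displays.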

Last edited by johnsfine; 04-21-2010 at 02:15 PM.
 
Old 04-21-2010, 10:55 PM   #3
sundialsvcs
Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 5,330

Rep: Reputation: 1100
The memory allocator can basically do whatever it wants. Its behavior might be unpredictable, and might produce misleading statistics.

A good strategy might be to re-code the program to use a free list. When you no longer need a memory block (at the end of step 1), knowing that you will very soon need that block again (in step 2), don't bother to free() it. Instead, simply attach it to a singly linked list.

The program's allocation logic then becomes: check the list first, and remove the first block from it if the list isn't empty.

If you know that you'll be allocating blocks of the same size, consider allocating memory in large multiples of that size, then splitting it up.

When the program ends, it can simply end, knowing that all of its memory will be cleaned-up anyway.
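The free-list idea above can be sketched in a few lines of C++ (a sketch with names of my own choosing, not the OP's code). Released blocks are chained through their own first bytes, so the list itself costs no extra memory:

```cpp
#include <cstdlib>

// Minimal free-list sketch for fixed-size blocks: instead of free(),
// push the block onto a singly linked list and reuse it on the next alloc.
struct FreeList {
    struct Node { Node* next; };
    Node*  head = nullptr;
    size_t block_size;

    explicit FreeList(size_t sz)
        : block_size(sz < sizeof(Node) ? sizeof(Node) : sz) {}

    void* alloc() {
        if (head) {                      // reuse a previously released block
            Node* n = head;
            head = n->next;
            return n;
        }
        return std::malloc(block_size);  // list empty: fall back to malloc
    }

    void release(void* p) {              // "don't bother to free() it"
        Node* n = static_cast<Node*>(p);
        n->next = head;
        head = n;
    }
};
```

A block released at the end of step 1 is handed straight back by the next alloc() in step 2, so the process never asks the OS for more memory for that block.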

Last edited by sundialsvcs; 04-21-2010 at 10:57 PM.
 
Old 04-22-2010, 06:28 AM   #4
johnsfine
Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,064

Rep: Reputation: 1106
Quote:
Originally Posted by sundialsvcs View Post
The memory allocator can basically do whatever it wants. Its behavior might be unpredictable, and might produce misleading statistics.
But the versions of malloc in general use in the GNU C library are very good at reusing free blocks of the right size if they exist.

All the behavior is very predictable, stable, and conservative until the pool of free memory inside the process runs out and more must be requested from the OS. Then how much more it requests (the bump in the process's VIRT size) depends on more factors, so it may seem unpredictable. Also, at that point the memory is merely committed, not used, so the stats (including VIRT) are misleading. But other than the stats, that is not a serious problem, because committing memory without using it generally has near-zero cost on Linux.


Quote:
A good strategy might be to re-code the program to use a free list. When you no longer need a memory-block (at the end of step #1), knowing that you will very soon again need that memory block (in step #2), don't bother to "free()" it. Instead, simply attach it to a singly-linked list.
If that would have been effective, then malloc's ability to reuse a block of the right size when one is available would also be effective. Such a free list might significantly reduce the CPU time spent allocating and freeing memory, and depending on allocation and use patterns, it might reduce cache misses. But there are very few situations in which it could predictably reduce memory fragmentation. Typically the balance of memory use inside vs. outside such a pool shifts enough during the run of the program that having the pool increases memory fragmentation.

Quote:
Originally Posted by sundialsvcs View Post
If you know that you'll be allocating blocks of the same size, consider allocating blocks in large multiples of that size, then split it up.
Meaning something similar to the "slab allocator" used inside the Linux kernel.

If you have a good understanding of the memory request behavior of your program, then with a significant amount of effort you can use a slab allocator to reduce the overhead of memory management. If a 32 bit program is making enough small allocation requests to total near 3GB, that is some indication that a slab allocator could help.

But a slab allocator would just reduce the overhead and maybe keep the program fitting in 32 bit address space longer.
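As an illustration of the idea, a user-space slab-style sketch might look like the following (this is my own simplification, not the kernel's slab allocator; the class and parameter names are assumptions):

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

// Slab-style sketch: grab one big chunk per "slab" and carve it into
// fixed-size blocks, eliminating per-allocation malloc bookkeeping.
class Slab {
    std::vector<char*> slabs_;   // big chunks this allocator owns
    char*  cursor_ = nullptr;    // next unused block in the current slab
    char*  end_    = nullptr;
    size_t block_, per_slab_;
public:
    explicit Slab(size_t block, size_t per_slab = 1024)
        : block_(block), per_slab_(per_slab) {}

    void* alloc() {
        if (cursor_ == end_) {   // current slab exhausted: grab another
            char* s = static_cast<char*>(std::malloc(block_ * per_slab_));
            slabs_.push_back(s);
            cursor_ = s;
            end_    = s + block_ * per_slab_;
        }
        void* p = cursor_;
        cursor_ += block_;
        return p;
    }

    ~Slab() {                    // blocks are never freed individually
        for (char* s : slabs_) std::free(s);
    }
};
```

For millions of small same-sized allocations, carving blocks out of large chunks like this removes the per-block header overhead that malloc must keep.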

If there are symptoms of a memory leak, the slab allocator (or free lists or whatever) doesn't address that. Either find and fix the memory leak, or understand the statistics that created the appearance of a memory leak well enough to realize if that appearance is false.

Last edited by johnsfine; 04-22-2010 at 06:47 AM.
 
Old 04-23-2010, 10:06 PM   #5
sundialsvcs
Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 5,330

Rep: Reputation: 1100
I agree entirely with what John says ... and BTW the two observations are not, in fact, conflicting.

As I said, there are many factors which affect memory handling, and just as many factors which affect ("lies, damned lies, and...") statistics. There are likewise strategies that can be considered in specialized situations (such as the free-list strategy), and given the OP's description of exactly how this program is supposed to work, the judgment might be made to apply one of those strategies here.

(It goes without saying that the GNU memory-allocation system is superbly designed ... and I suggest nothing otherwise.)

But the first thing, always, is to thoroughly understand the program, and to explore very, very carefully the (very probable) theory that it holds "yet one more" insidiously clever bug.

Last edited by sundialsvcs; 04-23-2010 at 10:08 PM.
 
Old 04-26-2010, 08:35 AM   #6
rpjanaka
LQ Newbie
 
Registered: Jun 2007
Posts: 4

Original Poster
Rep: Reputation: 0
Hi John,
Thanks for your reply; I have a few things to ask.

Quote:
There are many kinds of memory leaks that Purify can't find.
Can you please give some examples?

Quote:
top shows many values. Which are you looking at that you call "memory usage"?
I am using the VIRT value for my statistics.

Quote:
That is a very unclear statement.
I can explain this with an example:
Assume you insert a data set X, and the process consumes 10 MB.
Then you insert the same data set X again; the memory consumption of the process should stay at 10 MB.
 
Old 05-03-2010, 06:55 AM   #7
rpjanaka
LQ Newbie
 
Registered: Jun 2007
Posts: 4

Original Poster
Rep: Reputation: 0
Hi John,
According to you, this could be due to memory fragmentation. If it is, is there a way to investigate this from the OS side?
e.g. analyzing /proc/buddyinfo or checking pmap output
 
Old 07-01-2010, 06:12 AM   #8
rpjanaka
LQ Newbie
 
Registered: Jun 2007
Posts: 4

Original Poster
Rep: Reputation: 0
Hi all,

I was able to find the reason for this memory growth. It is in fact an intentional leak (the program keeps unneeded memory and releases it only at shutdown time), caused by not clearing a std::list.
So my advice for others: do not assume there are problems with the OS
(e.g. wrong output from the top command, or memory growth due to fragmentation)
unless you have pretty good proof. In my case the program makes lots of small memory allocations (more than a couple of million), yet there is still no memory growth due to fragmentation. So the assumptions I made at the beginning of this thread were completely wrong.
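The bug pattern described above can be reproduced in a few lines (a hypothetical reconstruction; the container name and insert_data_set are my own). Without the clear() call, re-inserting the "same" data simply appends new nodes, so memory grows even though every node is still reachable and no tool reports a leak:

```cpp
#include <list>

// Hypothetical reconstruction of the bug: re-inserting the same data
// set without clearing the container first just grows it.
std::list<int> g_cache;

void insert_data_set(bool clear_first) {
    if (clear_first)
        g_cache.clear();   // the missing call in the original program
    for (int i = 0; i < 1000; ++i)
        g_cache.push_back(i);
}
```

Because the stale nodes are freed at shutdown, leak detectors that only flag memory still allocated at exit with no remaining references see nothing wrong.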
 
  

