Programming
This forum is for all programming questions. The question does not have to be directly related to Linux, and any language is fair game.
If you are running your app under Linux, the OOM (Out of Memory) Killer will terminate one or more apps when memory runs low. Most likely the app that is killed will be the one sucking up all the memory, but there's no guarantee.
That just "commits" the memory, very little is actually allocated.
Unless you have unusual settings for "over commit" there is no effect on other processes from this process committing a lot of memory that it doesn't actually allocate.
If you want to actually allocate the memory, you have to write to all of it (write to at least one byte out of every 4096 bytes).
Also, you should be aware that below a certain request size, malloc groups allocations together: several requests will be satisfied from memory malloc already committed, and then a large chunk will be committed at once when the initial pool runs out, to be used for the next several requests.
So to see commit, allocation, and the other measures all change in sync each second (as I expect you want), you need a large enough request in addition to writing to the memory that malloc gives you. I don't know whether 100000 is "large enough", because that varies by version of malloc, but I suspect it is not.
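To illustrate both points, something like this (an untested sketch; the 1 MB request size and the hard-coded 4096-byte page size are assumptions on my part, since both vary):
Code:
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    while (1)
    {
        /* One megabyte per second: large enough to bypass any
           small-request pooling inside malloc. */
        char *buffer = malloc(1024 * 1024);
        if (buffer == NULL)
            break;              /* allocation failed; give up */
        /* Touch one byte in every page so the kernel backs the
           whole block with physical memory, not just address
           space.  4096 is assumed here; sysconf(_SC_PAGESIZE)
           is the portable way to get the real page size. */
        for (size_t i = 0; i < 1024 * 1024; i += 4096)
            buffer[i] = 1;
        sleep(1);
    }
    return 0;
}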
Quote:
Also, you should be aware that below a certain request size, malloc groups allocations together: several requests will be satisfied from memory malloc already committed, and then a large chunk will be committed at once when the initial pool runs out, to be used for the next several requests.
Do you have a pointer to some info on this effect? The only "size matters" issue I'm aware of with malloc is that a request of <= 64 bytes uses the skb() system call, and > 64 bytes uses an anonymously-backed mmap call. I'm interested in the effect you describe (coalescing of memory-block requests). Are these related to the internal "arena" system?
Additionally, I was under the impression that a call to mlockall(MCL_CURRENT | MCL_FUTURE); would remove the need to touch pages to force a commit (as long as you have CAP_IPC_LOCK, a.k.a. root). I could be wrong on this, however.
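For what it's worth, that mlockall() approach would look something like this (a sketch; whether it fully removes the need to touch the pages is exactly the open question above, and it needs CAP_IPC_LOCK or a large enough RLIMIT_MEMLOCK):
Code:
#include <stdlib.h>
#include <sys/mman.h>

int main(void)
{
    /* Lock all current and future mappings into RAM.
       Requires CAP_IPC_LOCK (typically root). */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        return 1;

    /* With MCL_FUTURE in effect, a new mapping such as this one
       should be faulted in and locked as soon as it is created,
       without the program ever writing to it. */
    char *buffer = malloc(1024 * 1024);
    (void)buffer;
    return 0;
}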
Quote:
Do you have a pointer to some info on this effect? The only "size matters" issue I'm aware of with malloc is that a request of <= 64 bytes uses the skb() system call, and > 64 bytes uses an anonymously-backed mmap call. I'm interested in the effect you describe (coalescing of memory-block requests). Are these related to the internal "arena" system?
I don't know the amount on which that decision is based, but it is far higher than 64 bytes. It isn't even practical below 4096 bytes, and I'm sure it isn't that low. Maybe you meant 64K bytes.
Small requests to malloc are typically satisfied from memory previously obtained from the OS by (I forget the name, but you said skb and that is likely correct). If the memory isn't available from a previous skb, then a new skb will be used to commit far more than the current request.
Large requests to malloc look first in the same pool of previously committed memory, but are almost never satisfied that way. Then they use mmap.
I need to write a small program that eats away at available memory. I need to create a memory leak to test how other programs cope.
I need to run this program on Linux and see if the available memory is decreasing.
So I have done:
Code:
#include <stdlib.h>   /* malloc() */
#include <unistd.h>   /* sleep() */

int main(void)
{
    int *buffer;
    while (1)
    {
        buffer = (int*)malloc(100000);
        sleep(1);
    }
    return 0;
}
I run this program in the background and then I use:
Code:
free -m
To see the output of available memory like so:
Code:
             total       used       free     shared    buffers     cached
Mem:          3293       3168        124          0        237       1205
The problem is that I don't see the 'free' value decrease over time.
Is my approach wrong here?
I have checked that my process is running using ps -ef, and it is.
Thanks
Your 'buffer' variable is not used anywhere; it never appears on the RHS (Right-Hand Side) of any expression. It is also of storage class 'auto', i.e. not global, so it can't be used by, say, a separate function. So the compiler has the right to optimize out the whole
Code:
buffer = (int*)malloc(100000);
line.
...
ntubski made a correct suggestion on how to make 'buffer' actually be used.
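ntubski's post isn't quoted in this thread, but the usual fix looks something like this (a sketch under that assumption; memset() both makes 'buffer' genuinely used and forces every page to be backed):
Code:
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    while (1)
    {
        char *buffer = malloc(100000);
        if (buffer == NULL)
            break;                  /* out of memory: stop */
        /* Writing to the whole block keeps the compiler from
           discarding the allocation and makes the kernel back
           every page with physical memory. */
        memset(buffer, 1, 100000);
        sleep(1);
    }
    return 0;
}
A sufficiently clever compiler could in principle still discard an allocation that is written but never read, so compiling without optimization is the safest bet for a test like this.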
Quote:
Your 'buffer' variable is not used anywhere; it never appears on the RHS (Right-Hand Side) of any expression. It is also of storage class 'auto', i.e. not global, so it can't be used by, say, a separate function. So the compiler has the right to optimize out the whole
Code:
buffer = (int*)malloc(100000);
line.
Not correct.
The function call cannot be optimized out, even though the LHS variable can.
Quote:
I don't know the amount on which that decision is based, but it is far higher than 64 bytes. It isn't even practical below 4096 bytes, and I'm sure it isn't that low. Maybe you meant 64K bytes.
Small requests to malloc are typically satisfied from memory previously obtained from the OS by (I forget the name, but you said skb and that is likely correct). If the memory isn't available from a previous skb, then a new skb will be used to commit far more than the current request.
Large requests to malloc look first in the same pool of previously committed memory, but are almost never satisfied that way. Then they use mmap.
I don't want to hijack the thread any further.
Firstly, the call is sbrk() / brk(). My bad on that. Too much time in network driver land.
Second, it seems the current threshold is 128K as the minimum mmap size (this is from my reading of the current malloc in the glibc git repository). Anything below that falls back to an sbrk()/brk() call. Using 128*1024 as your malloc size would get you there faster, as johnsfine points out. My understanding of how the internal glibc malloc works is quite outdated.
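If anyone wants to watch that crossover directly, glibc lets you move the threshold at runtime with mallopt() (a sketch; M_MMAP_THRESHOLD is glibc-specific, and the request sizes here are arbitrary):
Code:
#include <malloc.h>   /* mallopt, M_MMAP_THRESHOLD (glibc-specific) */
#include <stdlib.h>

int main(void)
{
    /* Lower the mmap threshold from its 128K default so that an
       8K request is served by mmap instead of brk/sbrk. */
    mallopt(M_MMAP_THRESHOLD, 4096);

    void *small = malloc(1024);   /* below threshold: heap (brk/sbrk) */
    void *large = malloc(8192);   /* above threshold: anonymous mmap */

    free(large);                  /* mmap-backed blocks get munmap'd */
    free(small);
    return 0;
}
Running it under strace -e trace=brk,mmap,munmap makes the difference between the two paths visible.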