Programming
This forum is for all programming questions.
The question does not have to be directly related to Linux and any language is fair game.
I wrote this little program to try to make the system run out of memory, but it doesn't seem to work: my system is still 100% usable, and 'ps aux' shows the process using only 0.1% of memory.
Code:
/* keep calling realloc() until it runs out of memory */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    char *ptr;
    size_t len;

    len = 1;
    ptr = malloc(len);

    while (len *= 2)   /* loop ends if len ever overflows to 0 */
    {
        /* note: on failure this overwrites ptr with NULL, so the old
         * block leaks and the next realloc() starts over from scratch */
        if ((ptr = realloc(ptr, len)) == NULL)
            fprintf(stderr, "realloc failed trying to allocate %zu bytes\n", len);
        if (len > 67108864) /* 64 MiB */
        {
            fprintf(stdout, "going to sleep\n");
            sleep(45);
        }
    }
    return 0;
}
Before I added the 64MB check, realloc seemed to fail when integer overflow happened, not when the system ran out of memory.
Basically I'm spending time worrying about this for my little shell program. I'm starting to wonder whether it's worth checking whether realloc failed when I try to allocate space for a line of arbitrary length. I'm starting to really doubt it...
I initially malloc LINE_MAX bytes, and if that's not enough I call realloc as necessary.
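For what it's worth, checking realloc costs almost nothing if the growth logic is isolated in one helper. Here is a sketch of such a helper (the name grow_line() and the doubling policy are assumptions, not code from this thread); using a temporary pointer means the original buffer is neither lost nor leaked when realloc fails:

```c
/* Sketch: safely grow a line buffer. grow_line() doubles the capacity;
 * on failure the caller's buffer is untouched and can still be freed. */
#include <stdint.h>
#include <stdlib.h>

/* Double *cap and reallocate *line; returns 0 on success, -1 on failure. */
static int grow_line(char **line, size_t *cap)
{
    char *tmp;
    size_t new_cap;

    if (*cap > SIZE_MAX / 2)
        return -1;                /* doubling would overflow size_t */
    new_cap = *cap * 2;

    tmp = realloc(*line, new_cap);
    if (tmp == NULL)
        return -1;                /* *line is still valid here */

    *line = tmp;
    *cap = new_cap;
    return 0;
}
```

The caller just does `if (grow_line(&line, &cap) != 0)` and decides once, in one place, how to bail out.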
Quote:
"I wrote this little program to try and make the system run out of memory - it doesn't seem to work, because my system is still 100% usable and 'ps aux' only shows memory usage as 0.1% for the process."
You can make the problem a little easier by getting rid of swap during the experiment. Disable all swap with:
Code:
swapoff -a
I haven't actually tried this, but I think using setrlimit to reduce RLIMIT_DATA would help. That would tell the kernel not to give your process so much heap memory.
Distribution: Solaris 11.4, Oracle Linux, Mint, Debian/WSL
Posts: 9,789
Rep:
Allocating memory is not enough: it reserves pages of virtual memory but does nothing else.
If you want to really simulate a memory shortage, you need to make the kernel map each of those pages to physical memory; the simplest way is to read or write one byte per page.
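That page-touching loop might look like this (a sketch; touch_pages() is an assumed name, and writing rather than reading is used because a write is guaranteed to force the kernel to commit the page):

```c
/* Sketch: write one byte per page so every page of the buffer is
 * actually backed by physical memory (or swap), not just reserved. */
#include <stdlib.h>
#include <unistd.h>

static void touch_pages(char *buf, size_t len)
{
    long page = sysconf(_SC_PAGESIZE);
    if (page <= 0)
        page = 4096;              /* fall back to the common page size */

    for (size_t i = 0; i < len; i += (size_t)page)
        buf[i] = 1;               /* one write per page forces a page-in */

    if (len > 0)
        buf[len - 1] = 1;         /* make sure the tail page is touched */
}
```

Calling touch_pages(ptr, len) after each successful realloc in the original program should make memory usage in 'ps aux' climb for real.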
"Unfortunately", it depends on the malloc implementation. From the malloc manpage:
Quote:
By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available. This is a really bad bug. In case it turns out that the system is out of memory, one or more processes will be killed by the infamous OOM killer. In case Linux is employed under circumstances where it would be less desirable to suddenly lose some randomly picked processes, and moreover the kernel version is sufficiently recent, one can switch off this overcommitting behavior using a command like

# echo 2 > /proc/sys/vm/overcommit_memory

See also the kernel Documentation directory, files vm/overcommit-accounting and sysctl/vm.txt.
There are malloc implementations out there that use mmap(2) to get file-backed memory in /tmp. It's hard to make robust implementations fail, but this OOM killer is something I turn off. Try with RLIMIT_DATA as rsheridan6 pointed out. Try setting malloc() options too. If you're trying to make your program robust, then check NULLs and -1's. Installing a signal handler for SIGSEGV doesn't make sense. Note that not every malloc implementation sets errno. I usually create a wrapper like this:
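The original post did not include the wrapper code, but based on the surrounding description it would be something along these lines (my_malloc() and the bookkeeping variables are assumed names; only memory_allowed comes from the post):

```c
/* Sketch of a malloc() wrapper that simulates a system with only
 * memory_allowed bytes available, failing requests past that budget. */
#include <stdlib.h>

#define MEMORY_ALLOWED (16 * 1024 * 1024)   /* pretend 16 MiB exists */

static size_t memory_allowed = MEMORY_ALLOWED;
static size_t memory_used = 0;

static void *my_malloc(size_t len)
{
    if (len > memory_allowed - memory_used)
        return NULL;                  /* simulated allocation failure */

    void *p = malloc(len);
    if (p != NULL)
        memory_used += len;           /* crude: free() is not tracked */
    return p;
}
```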
Then either set memory_allowed to some static value with a #define or come up with some fancy-schmancy way of real-time modification to simulate requests made by other processes.
EDIT:
Didn't mean to rehash what primo said... I think we were both headed in the same direction.
Last edited by Dark_Helmet; 10-22-2005 at 06:10 PM.