Programming
This forum is for all programming questions. The question does not have to be directly related to Linux, and any language is fair game.
I wrote a C program, compiled with gcc, and I am running it on a Unix server that has 8 CPUs. The program is a simulation and I have to run it for different sets of input parameters. I run the program 6 times simultaneously with 6 different sets of input parameters. When I do this, the programs crash. However, when I run only 3 of them simultaneously, and then run the other 3, they all finish successfully. Below is the output of the top command when only one of my programs is running.
I was told that the committed memory is the memory my program has asked for; committed memory may or may not actually be used. At this time my program has approximately 8809 MB of committed memory, of which approximately 1.5 GB is actually in use. My programs will crash if they try to use more memory than the RAM plus swap the system has. They will also crash if they try to commit a very large amount of memory. Can you tell me how much memory my programs are allowed to commit?
Anyone else joining this discussion may want to know that it moved here from a GCC mailing list (which was very much the wrong place for it). For background info, here is a link to an interesting point in the middle of that previous discussion: http://gcc.gnu.org/ml/gcc-help/2010-03/msg00069.html
To get some basic info on the current state, post the output from the following commands:
Code:
grep Commit /proc/meminfo
/sbin/sysctl vm | grep commit
That CommitLimit was computed by Linux from my 28GB of swap plus a percentage (vm.overcommit_ratio, 50% by default) of my 7GB of ram.
The Committed_AS value is the (small at the moment) total amount committed for all the processes on this system.
The vm.overcommit_memory setting is the mode Linux is in for deciding how much to overcommit memory (that is, how far to allow the Committed_AS value to exceed the CommitLimit). 0 is the default, heuristic mode and the most complicated. I did a few searches on it just now for you, and I don't quite understand its rules nor what else affects them.
One thing you could do is switch to root and give the command
Code:
/sbin/sysctl vm.overcommit_memory=1
That changes the rule for overcommitting memory so Linux will grant any memory request, regardless of how much memory is already committed. If more memory is actually used than is available (most of ram plus all of swap), then the "out of memory killer" will select some process and kill it to reduce memory use. But with vm.overcommit_memory=1, no process will crash simply from requesting a large chunk of memory that it won't actually use (which is what your top output seems to say your processes do).
In the previous discussion, you never answered my question about the top output showing one of your processes with about 1.5GB actually used. Do you have good reason to believe that top was run when that process was near its maximum memory use?
Based on the info you have provided, I think it is very likely your attempt to run six of those crashed due to the overcommit limit and not due to actual memory use. As I described above, you could completely stop the overcommit crash from happening.
But that does not guarantee your six processes will run correctly to completion. Maybe they use more than 1.5GB each sometime later in their processing.
If the six processes use more than your 10GB of physical ram, they might slow down due to swapping, but probably not. Most likely some significant fraction of their memory use at any moment is stale and won't noticeably affect performance if it is in the swap partition.
But if the six processes use more than the 10GB of ram plus the 3 GB of swap, the out of memory killer will kill something.
As I said in the other discussion, 3GB of swap is not enough margin for error. I suggest significantly increasing the swap size.
Can you explain the following command? What does the 'vm' do?
/sbin/sysctl vm | grep commit
I believe that the memory the simulations are using will not change. After running for some time they reach a steady state. However I have to run them for different input parameters. If I change the input parameters the used memory will change.
From what I have read, I only partly understand the default overcommit rules, but they seem to imply the kernel doesn't actually care how much it has already committed to other processes when deciding whether to commit more. It only cares how much memory is unused at that moment and how much the one process making the request wants.
I'm far from sure I understand that correctly, but your large value for Committed_AS supports that view.
In your situation, if you don't modify vm.overcommit_memory you would need the excess (unused) swap space to be at least as large as the largest single task's excess (committed but unused) memory.
The processes you are now describing seem to be actually using 1.9GB and committing nearly 8.9GB. If you had six of those, you might expect them to use about 9.5GB of ram plus 1.9GB of swap. Then you would need a minimum of 7GB of extra swap space to satisfy the commit limit heuristic rules (as I think I might understand them).
So increasing swap space from the current 2.9GB up to 9GB or more ought to let you run those six copies of the current task, and probably they would swap little enough that running six at a time would be faster than running fewer at a time more times.
As I said before, I think changing vm.overcommit_memory to 1 would also let you run six at once without needing you to increase swap space. But that plan has far less margin for error if the processes are a little bigger than you think.
Quote:
Can you explain the following command? What does the 'vm' do?
/sbin/sysctl vm | grep commit
I assume it didn't work. The `vm` argument asks sysctl to list the kernel parameters in the vm (virtual memory) group, and the grep then keeps only those whose names include "commit".
On the system where I tried it before suggesting it, it listed the values of every parameter (there were only two) whose name started with vm. and included commit.
But on the system where I am now, it just didn't work; apparently not every version of sysctl accepts a group name by itself. I was hoping to see any vm.*commit* parameters your system might have without needing to learn sysctl well enough to ask for a full list.
The one that really matters is vm.overcommit_memory
You can get its value with
/sbin/sysctl vm.overcommit_memory
and after switching to be root you can change its value with
/sbin/sysctl vm.overcommit_memory=1
Seeing if there were other vm.*commit* parameters was just a matter of curiosity. I don't know how much that might vary by kernel or distribution, but I have no specific reason to expect any relevant info there.