Need to increase shared memory on Fedora 10
Hi All
I need to increase the shared memory on my Fedora 10 system, as I am getting an error running some software I've installed. Whilst not a complete newbie, please treat me as one for this question, because it goes beyond any configuration I have ever done! I'm also mainly an Ubuntu user and find that Fedora support is very limited. A colleague who looked at the software's output log said I needed to increase the shared memory size. I have had a look online for help as usual, but have been unable to find anything terribly clear; most of what I have found is people moaning about changes made to the configuration of Fedora 10! Could someone please explain, in simple step-by-step terms, what I need to do? :-) Thank you :-) Emma
What error? If you tell us the real problem, we can tell you a real solution.
Maybe he is nearly as confused as you are, or maybe you just misunderstood him. It could have been very reasonable advice if he was talking about a specific parameter of the specific application you were trying to use. It makes no sense if he was talking about a parameter of Linux itself.
Presumably he is suggesting you need to address the kernel parameters related to shared memory. These are in /etc/sysctl.conf, for example:

Code:
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 4294967295
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 268435456

Parameters can be modified there, or on the fly with the sysctl command. However, as the prior poster suggests, you really need to provide more detail about what errors you are seeing, what application you are running, and possibly other things (e.g. how much physical memory and swap you have) for anyone to give reasonable suggestions. Modifying kernel parameters without knowing what you are doing can have serious consequences. The issue may not be that you need more "shared memory" but rather that you need to configure the application to ask for less. On some systems running Oracle I've had to have DBAs reduce the SGA size to prevent it from trying to grab more "shared memory" than was possible given the physical memory and swap.
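The inspect-then-tune workflow described above can be sketched as a shell session. This is a sketch only, and the numeric values are illustrative examples, not recommendations; size the limits to what the application's documentation actually asks for:

```shell
# Sketch, not a recipe: inspect the SysV shared-memory limits and
# (as root) raise them.  The example values are illustrative only.

# Read the current limits (no root needed)
cat /proc/sys/kernel/shmmax    # largest single segment, in bytes
cat /proc/sys/kernel/shmall    # total shared memory, in pages

# As root, change them for the running kernel:
#   sysctl -w kernel.shmmax=4294967295
#   sysctl -w kernel.shmall=268435456
#
# Persist across reboots by adding the same lines to /etc/sysctl.conf,
# then verify the file parses cleanly by re-loading it:
#   sysctl -p /etc/sysctl.conf
```

`sysctl kernel.shmmax` reads the same value through the sysctl interface; the /proc paths are shown because they work without the sysctl binary installed.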
But I still think that is most likely not what Emma's colleague was suggesting.
Given physical and swap limits, plus (if 32-bit) virtual address limits, I'm still confident it is unlikely that the right answer is increasing a system-wide shared memory limit, even though you showed I was wrong about the existence of a system-wide parameter. But just in case it is a strange situation and/or we are wrong, I suggest Emma also post the current values of kernel.shmmax and kernel.shmall from /etc/sysctl.conf (in addition to the application, the error, physical memory, swap size, architecture, etc.). Then you (or maybe even I) can make a more accurate assessment of whether changing those system-wide parameters is plausible. Hopefully someone here will then know what application-specific shared memory (or other) setting Emma really should change.
I'm using 32-bit Fedora 10 on a virtual machine. The software I am trying to run is a commercial Synthetic Aperture Radar processor with a Java front end and c-shell executable scripts, developed in a variety of languages. It is installed correctly but won't run, because it dies with the following error (which I am sure is pretty meaningless to everyone, which is why I didn't post it before): Code:
(SCO_SpecCtrlMain:pos) POS_ShrMem: Error in creation of shared memory %1$s Invalid argument

Code:
limit stacksize unlimited

I have looked at /etc/sysctl.conf. There are three net.ipv4 settings and one kernel.sysrq setting, and that's it; nothing at all relating to memory. swapon -s gives the following:

Type: Partition
Size: 5079032
Used: 56
Priority: -1

The physical size of the machine is 5GB, but all my data is stored on an NFS store. The colleague who said to increase the shared memory also supplied the user manual. I need 4GB of - and I quote - "shared memory". This colleague also said they only increased theirs by a factor of four, not to 4GB. All I want to know is how to increase this "shared memory". I would be very grateful for your constructive assistance. Thank you Emma
Did you try jlightner's suggestion of the sysctl command? Code:
/sbin/sysctl kernel.shmall
Apology accepted! :)
I know I didn't supply much information; as a rule I do, but in this case I really didn't know what information could possibly help, and was hoping to take a lead from the first responses. I forgot all about that command - it was a bit of a data-deluge situation. The result is:

kernel.shmall = 2097152

and I tried it for the other parameter too, which gives:

kernel.shmmax = 105984000

I have to confess I just added shmmax to /etc/sysctl.conf between posts (after taking a snapshot of my system), just to see if putting in 4 times the result I got from swapon made any difference (I didn't touch shmall). So far the process still fails with the same error. I can roll it back at any time if either of you need me to! The 5GB is the size of this VM, i.e. the total size I can see if I do a df, as if it was the hard drive. VMs are still pretty magical to me - I just use them! Cheers
Just a guess, but I remember a while back (I switched to 64-bit a couple of years ago) that I had to set ulimit to run certain applications correctly. No idea if this applies or not.
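For what it's worth, the "limit stacksize unlimited" line from the manual is csh syntax; in bash/sh the equivalent is the ulimit builtin. A minimal sketch of checking and raising it (assuming a bash/sh shell, and that the hard limit allows raising the soft one):

```shell
# Sketch: per-process resource limits in bash/sh.
# csh's "limit stacksize unlimited" corresponds to "ulimit -s unlimited".

ulimit -a               # show all limits for the current shell
ulimit -s               # stack size in KiB, or "unlimited"

# Raise the soft stack limit for this session only; put it in the
# application's startup script if it is needed every run.
ulimit -s unlimited
```

Note the change only affects the current shell and its children, which is why it usually belongs in the script that launches the application.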
Oh - it just completed successfully!!! One final note: this VM exists only to run this software, so any system changes will not impact anything else. Thank you and good night.
What about that swapon command - was that inside or outside the VM? I think you're saying the whole virtual disk of the VM is just 5GB, so I think it couldn't have much, if any, swap space. I also don't see where you said how much physical RAM the VM thinks it has; maybe a free command inside the VM would clear up our understanding. Since I'm not used to tiny VMs, I'm not used to tiny values like kernel.shmmax = 105984000. I don't know whether that is way too small (meaning the desired shared region is some weird mapping that doesn't need a physical backing store), or whether that shmmax reflects more fundamental limits in the VM and you just haven't given the VM enough resources to run the task you want to run.

Edit: I see you solved it while I was typing the above. Sorry about my errors that expanded this thread uselessly outside what you asked. In effect you asked a question whose direct answer was that kernel.shmmax parameter; we didn't really need to know any more than you provided in the first post. I just never knew that parameter could be a significant limit.
Many of the tunables for sysctl have defaults, so they may not appear in sysctl.conf at all; you can add them there to override the defaults, though.
Running sysctl makes the change to the running environment; putting it into sysctl.conf ensures you get the same parameter after a reboot. Also, by doing something like "sysctl -p /etc/sysctl.conf" you can verify that what you put in the file is valid, because this sets variables by reading the file just as is done at boot. You can run sysctl -a to see the available variables, and "man sysctl" and "man sysctl.conf" will give you more information.

The messages you posted do have some meaning, as they clearly indicate the issue is with shared memory (we could guess from "ShrMem", but don't have to, because they clearly say things like "Error in creation of shared memory").

Note that it isn't always the size of shared memory segments (shmmax); sometimes it is the quantity of shared memory segments. For the shared-memory-related parameters you can do:

Code:
sysctl -a | grep -i shm

which might produce something like this:

Code:
kernel.shmmni = 4096
kernel.shmall = 268435456
kernel.shmmax = 4294967295

Shmmni is the number of shared memory identifiers, and it could be that this parameter is too low (though I've not seen that in a long time). As you can see, even though I only had two shared memory parameters in my sysctl.conf, the command above displayed three, which lets us know the third one is at its default.

There are other reasons you could get shared memory errors even if the kernel and the application are configured correctly:

1) Many programs require a shared memory segment at a specific memory address. If something is already using that address (or spans a range that includes that address), the program won't start. Most often this occurs because another copy of the application is running, or one wasn't stopped correctly (e.g. a kill -9 was done to its process), so the segment is still in the Interprocess Communication (IPC) table. You can examine what is in shared memory with the "ipcs -ma" command. (There are other IPC structures as well - see "man ipcs" for more detail.)
You can even remove shared memory segments with ipcrm, though you have to be very certain the segment is no longer in use, because removing a segment an application depends on will crash it.

2) Segments have to be contiguous, and you may be asking for one that would fit into overall memory but doesn't have enough contiguous space. Often you can solve this by ensuring the application requiring the largest segment is started before those requiring smaller ones.
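The stale-segment check described in 1) can be sketched like this (a sketch only; &lt;shmid&gt; is a placeholder to be copied from the ipcs output, not a real id):

```shell
# Sketch: find and (carefully) clean up stale SysV shared-memory segments.

ipcs -m        # one line per segment: key, shmid, owner, perms, bytes, nattch
ipcs -ma       # same, with all details

# Segments left behind by a crashed or kill -9'd process show nattch = 0.
# <shmid> below is a placeholder -- copy the real id from the ipcs output,
# and only remove a segment when nothing still attaches to it:
#   ipcrm -m <shmid>
```

The nattch column is the quickest sanity check before any ipcrm: a nonzero value means some process is still attached.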
Hi Everyone
Thank you all very much for your help with this. I have certainly learnt a lot that I may well forget, but will always be able to find on here later on! Lazlow - yes, you are right too; I also had to check some of the limits. For the record, I found the other settings and realised I've been caught out in a lie - it's apparently 64-bit Fedora. Memory is 4012MB and Memory Overhead (whatever that is) is 250.04MB. Cheers Emma