Fedora: This forum is for the discussion of the Fedora Project.
06-16-2009, 08:11 AM | #1
Member | Registered: Jun 2009 | Location: Spain | Distribution: Various, Ubuntu, Fedora, Open Solaris, Solaris, RHEL, CentOS | Posts: 64
Need to increase shared memory on Fedora 10
Hi All
I need to increase the shared memory on my Fedora 10 system as I am getting an error running some software I've installed.
Whilst not a complete newbie, please treat me as one for this question, because it goes beyond any configuration I have ever done! I'm also mainly an Ubuntu user, and I find that Fedora support is very limited.
A colleague who looked at the output log of the software said I needed to increase shared memory size.
I have had a look online for some help as usual, but I have been unable to find anything terribly clear; most of what I have found is people moaning about changes made to the configuration of Fedora 10!
Please could someone explain in simple terms the step-by-step process for what I need to do!
:-) Thank you :-)
Emma
06-16-2009, 08:48 AM | #2
LQ Guru | Registered: Dec 2007 | Distribution: Centos | Posts: 5,286
Quote:
Originally Posted by emmalg
I need to increase the shared memory on my Fedora 10 system
Maybe someone else here will have a clue what you're talking about. But my best guess is they won't. I don't think you're asking a meaningful question. I don't think the phrase "increase the shared memory" means anything when applied to a Linux system.
Quote:
as I am getting an error running some software I've installed.
What software?
What error?
If you tell us the real problem, we can tell you a real solution.
Quote:
A colleague who looked at the output log of the software said I needed to increase shared memory size.
But he didn't tell you how to do that?
Maybe he is nearly as confused as you are, or maybe you just misunderstood him. It could have been very reasonable advice if he was talking about a specific parameter of the specific application you were trying to use. It makes no sense if he was talking about a parameter of Linux itself.
Quote:
Please could someone explain in simple terms the step-by-step process for what I need to do!
Only after someone understands what you're trying to do.
Last edited by johnsfine; 06-16-2009 at 08:52 AM.
06-16-2009, 09:08 AM | #3
LQ Guru | Registered: May 2005 | Location: Atlanta Georgia USA | Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO | Posts: 7,831
Presumably he is suggesting you need to address the kernel parameters related to shared memory. These are in /etc/sysctl.conf, for example:
Code:
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 4294967295
# Controls the total amount of shared memory that can be allocated, in pages
kernel.shmall = 268435456
Parameters can be modified there, or on the fly with the sysctl command.
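For example, a rough sketch of checking and changing them on the fly (the value shown is purely illustrative, not a recommendation for any particular system):
Code:
# show the current values
sysctl kernel.shmmax kernel.shmall

# change shmmax in the running kernel only; this is lost at reboot
sysctl -w kernel.shmmax=2147483648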
However, as the prior poster suggests, you really need to provide more detail as to what errors you are seeing, what application you are running, and possibly other things (e.g. how much physical memory and swap you have) for anyone to give reasonable suggestions. Modifying kernel parameters without knowing what you are doing can have serious consequences.
The issue may not be that you need more "shared memory" but rather that you need to configure the application to ask for less. On some systems running Oracle I've had to have DBAs reduce the SGA size to prevent it from trying to grab more "shared memory" than was possible based on the physical memory and swap.
06-16-2009, 09:31 AM | #4
LQ Guru | Registered: Dec 2007 | Distribution: Centos | Posts: 5,286
Quote:
Originally Posted by jlightner
Presumably he is suggesting you need to address the kernel parameters related to shared memory. These are in /etc/sysctl.conf
I sure guessed wrong about that. I didn't know there were system wide parameters for shared memory.
But I still think that is most likely not what Emma's colleague was suggesting.
Quote:
(e.g. how much physical memory and swap you have)
We also probably need to know whether the kernel is 32-bit or x86-64, and, if the kernel is x86-64, whether the application is 32-bit or x86-64.
Quote:
The issue may not be that you need more "shared memory" but rather that you need to configure the application to ask for less. On some systems running Oracle I've had to have DBAs reduce the SGA size to prevent it from trying to grab more "shared memory" than was possible based on the physical memory and swap.
Your Oracle experience obviously exceeds mine, which amounts to reading about it and never actually trying it. I would have guessed the opposite for the needed application specific parameter adjustment (adjust the application limit up to make good use of the larger and reasonable system wide limit, rather than adjust the application limit down to fit within the system limit).
Given physical and swap limits plus (if 32 bit) virtual limits, I'm still confident it is unlikely that the right answer is increasing a system wide shared memory limit, even though you showed I was wrong about the existence of a system wide parameter.
But, just in case it is a strange situation and/or we are wrong, I suggest Emma should also post the current values of kernel.shmmax and kernel.shmall from /etc/sysctl.conf (in addition to what application and what error, physical memory, swap size, architecture, etc.) Then you (or maybe even I) can make a more accurate assessment of whether changing those system wide parameters is plausible.
Hopefully then someone here will know what application specific shared memory (or other) setting Emma really should change.
Last edited by johnsfine; 06-16-2009 at 09:34 AM.
06-16-2009, 10:49 AM | #5
LQ Guru | Registered: May 2005 | Location: Atlanta Georgia USA | Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO | Posts: 7,831
Quote:
Given physical and swap limits plus (if 32 bit) virtual limits, I'm still confident it is unlikely that the right answer is increasing a system wide shared memory limit, even though you showed I was wrong about the existence of a system wide parameter.
In fact, for DBMS solutions this is often exactly what needs to be done, and most of the major vendors give recommendations for setting the two variables I listed (as well as others) for an optimal run. If you see a shmget error it is often due to such constraints. By default shmmax is not typically optimized for DBs, so on every system where I've installed Oracle or Sybase I've usually tweaked that parameter and others based on such recommendations.
Quote:
I would have guessed the opposite for the needed application specific parameter adjustment (adjust the application limit up to make good use of the larger and reasonable system wide limit, rather than adjust the application limit down to fit within the system limit).
I was not implying that one doesn't usually tweak the kernel to fit the application, but rather noting that it is possible the application is misconfigured to use all of (or more than) what is available on a given system. That is to say, if you've adjusted shmmax to allow for 16 GB shared memory segments and they've set the SGA to 20 GB, you're likely to have issues. Also, while you might want to use "nearly all" of the resources for a given application (especially if the server is dedicated to that application), you can't really use "all" of them, because there are system processes and ancillary application processes that, while small in their resource consumption, do in fact require SOME resources to run.
06-16-2009, 11:23 AM | #6
Member (Original Poster) | Registered: Jun 2009 | Location: Spain | Distribution: Various, Ubuntu, Fedora, Open Solaris, Solaris, RHEL, CentOS | Posts: 64
Quote:
Originally Posted by johnsfine
Maybe someone else here will have a clue what you're talking about. But my best guess is they won't. I don't think you're asking a meaningful question. I don't think the phrase "increase the shared memory" means anything when applied to a Linux system.
What software?
What error?
If you tell us the real problem, we can tell you a real solution.
But he didn't tell you how to do that?
Maybe he is nearly as confused as you are, or maybe you just misunderstood him. It could have been very reasonable advice if he was talking about a specific parameter of the specific application you were trying to use. It makes no sense if he was talking about a parameter of Linux itself.
Only after someone understands what you're trying to do.
Politeness never hurt anyone. I am not a newbie to Linux; I have been using it for 6+ years for developing code and at home. This just happens to be the first time I have come across such a problem. Nevertheless, your tone REALLY offended me; I am actually shaking. If I were a newbie, I can guarantee that this would be my last experience of Linux.
As a reminder, the LQ rules state:
Quote:
This is *not* your average Linux forum. We are proud of the fact that despite of our growing numbers we continue to remain extremely friendly to both the newbie and the expert. When posting in the forum keep the following in mind:
...
* Personal attacks on others will not be tolerated.
...
* Do not post if you do not have anything constructive to say in the post.
...
* Challenge others' points of view and opinions, but do so respectfully and thoughtfully ... without insult and personal attack. Differing opinions is one of the things that make this site great.
Right, now down to business.
I'm using 32-bit Fedora 10 on a Virtual Machine.
The software I am trying to run is a commercial Synthetic Aperture Radar processor with a Java front end and C-shell executable scripts, developed in a variety of languages. It is installed correctly but won't run, because it dies with the following error (which I am sure is pretty meaningless to everyone, hence why I didn't post it before):
Code:
(SCO_SpecCtrlMain:pos) POS_ShrMem: Error in creation of shared memory %1$s Invalid argument
(SCO_SpecCtrlMain:sif) FATAL error in SIF_ShrMem.cc at line 177 - Unable to create share memory
(SWI_SpecProcMain:swi) Specan Processing Started
(SWI_SpecProcMain:pos) POS_ShrMem: Error in attaching to shared memory %1$s Invalid argument
(SWI_SpecProcMain:pos) POS_ShrMem: Error in creation of shared memory %1$s Invalid argument
(SWI_SpecProcMain:sif) FATAL error in SIF_ShrMem.cc at line 153 - Unable to create share memory
(SWI_SpecProcMain:sif) FATAL error in SWI_SpecProcMain.cc at line 206 - Unable to allocate general buffer set
I have two colleagues, both based at different sites, who have each managed to get this working, and it is their advice I am following. One said to increase the shared memory after reading this; the other said to put this:
Code:
limit stacksize unlimited
into my .cshrc file. I didn't have one, so I added it to /etc/csh.cshrc instead, as this is read first, but it made no difference.
I have looked at /etc/sysctl.conf. There are three net.ipv4 settings and one kernel.sysrq setting, and that's it. Nothing at all relating to the memory.
swapon -s gives the following:
Type: Partition
Size: 5079032
Used: 56
Priority: -1
The physical size of the machine is 5GB but all my data is stored on an nfs-store.
The colleague who said to increase the shared memory also supplied the user manual. I need 4GB of - and I quote - "shared memory". This colleague also said they only increased theirs by a factor of four, not to 4GB.
All I want to know is how to increase this "shared memory". I would be very grateful for your constructive assistance.
Thank you
Emma
06-16-2009, 11:47 AM | #7
LQ Guru | Registered: Dec 2007 | Distribution: Centos | Posts: 5,286
Quote:
Originally Posted by emmalg
Politeness never hurt anyone.
Sorry, I did not intend to offend. When a question doesn't provide the info needed to allow a constructive answer, I usually don't know a way to be both polite and informative about what should be changed. So I just try for informative.
Quote:
Originally Posted by johnsfine
I don't think the phrase "increase the shared memory" means anything when applied to a Linux system.
And on occasion I'm wrong. I haven't seen a Linux system on which the system wide limit on shared memory is low enough to matter (low enough that other limits wouldn't always hit first). But now I know that isn't always the case.
Quote:
I'm using 32-bit Fedora 10 on a Virtual Machine.
So far as I understand, that should mean 4GB is an absolute limit on the size of a shared memory segment. On the 32 bit systems I've used, 4GB is also the default limit. But I don't know if the Virtual Machine and/or something else in your configuration makes that different.
Quote:
which I am sure is pretty meaningless to everyone hence why I didn't post it before
I see your point, and you're probably right. But sometimes you need to anticipate that a question is so likely to be asked, that even if the answer is useless you might as well save time by providing it.
Quote:
I have looked at /etc/sysctl.conf. There are three net.ipv4 settings and one kernel.sysrq setting, and that's it. Nothing at all relating to the memory.
I'm surprised. I didn't know kernel.shmall and kernel.shmmax were supposed to be there before reading this thread. But now I've checked a few systems and on those systems those values are there (and shmmax is 4GB on the 32bit systems and larger than physically possible on the 64 bit systems).
Did you try jlightner's suggestion of the sysctl command?
Code:
/sbin/sysctl kernel.shmall
to display the value. If it isn't in the file, I don't know whether the command would see it either, but might as well try.
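(In practice, sysctl reads the live values from /proc/sys, so it should report them whether or not they appear in the file; a quick sketch of checking both parameters:)
Code:
/sbin/sysctl kernel.shmall kernel.shmmax
cat /proc/sys/kernel/shmmax /proc/sys/kernel/shmall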
Quote:
swapon -s gives the following:
Type: Partition
Size: 5079032
Used: 56
Priority: -1
Is that on the base system or inside the virtual system? That 5GB is very likely plenty if it is inside the virtual system. It isn't really relevant if it is the outer system.
Quote:
The physical size of the machine is 5GB
There also, it is significant what physical memory the virtual OS gets, not what the containing system has.
Quote:
Originally Posted by emmalg
after reading this; the other said to put this:
Code:
limit stacksize unlimited
into my .cshrc file. I didn't have one, so I added it to /etc/csh.cshrc instead, as this is read first, but it made no difference.
"unlimited" is rather a strange concept for stacksize, because the loader must make some layout decisions in the 32 bit virtual address space that constrain the stack and/or the heap and/or mappings such as shared memory. Where I have had issues with stacksize, I always needed to understand the issue and then set a specific size with a limit command. So if your stacksize should be fixed, I think it must be fixed in a better way. But I don't see anything here that makes me believe stacksize is part of the problem. I think your colleague is mistaken.
Quote:
All I want to know is how to increase this "shared memory". I would be very grateful for your constructive assistance.
Maybe jlightner already gave you that answer by suggesting the sysctl program and telling you the names of the two relevant parameters. It seems to be a pretty easy program to use, but I think his other advice is operative until you check at least a few more details:
Quote:
Originally Posted by jlightner
Modifying kernel parameters without knowing what you are doing can have serious consequences.
Last edited by johnsfine; 06-16-2009 at 12:09 PM.
06-16-2009, 12:00 PM | #8
Member (Original Poster) | Registered: Jun 2009 | Location: Spain | Distribution: Various, Ubuntu, Fedora, Open Solaris, Solaris, RHEL, CentOS | Posts: 64
Apology accepted!
I know I didn't supply much information; as a rule I do, but in this case I really didn't know what information could possibly help, and I was hoping to take a lead from the first responses.
I forgot all about that command; it was a bit of a data-deluge situation. The result is:
kernel.shmall = 2097152
and I tried it for the other parameter too which gives:
kernel.shmmax = 105984000
I have to confess I just added shmmax to /etc/sysctl.conf between posts, after taking a snapshot of my system, just to see if putting in 4 times the result I got from swapon made any difference (I didn't touch shmall). So far the process still fails with the same error. I can roll it back at any time if either of you need me to!
The 5GB is the size of this VM, i.e. the total size I can see if I do a df, as if it were the hard drive. VMs are still pretty magical to me - I just use them!
Cheers
06-16-2009, 12:14 PM | #9
Senior Member | Registered: Jan 2006 | Posts: 4,363
Just a guess, but I remember a while back (switched to 64 bit a couple of years ago) that I had to set ulimit to run certain applications correctly. No idea if this applies or not.
06-16-2009, 12:16 PM | #10
Member (Original Poster) | Registered: Jun 2009 | Location: Spain | Distribution: Various, Ubuntu, Fedora, Open Solaris, Solaris, RHEL, CentOS | Posts: 64
Quote:
Originally Posted by emmalg
kernel.shmmax = 105984000
I have to confess I just added shmmax to /etc/sysctl.conf between posts, after taking a snapshot of my system, just to see if putting in 4 times the result I got from swapon made any difference (I didn't touch shmall). So far the process still fails with the same error. I can roll it back at any time if either of you need me to!
Just multiplied that by 2 and started the process running. So far it hasn't failed. I'm off home now, but please let me know if I've done a bad thing by altering that with no regard to the consequences! I'll check the thread tomorrow, so please let me know if there's a more controlled manner in which I ought to make these changes!
Oh - It just completed successfully!!!
Oh - one final note: this VM only exists to run this software, so any system changes will not impact anything else.
Thank you and good night.
06-16-2009, 12:20 PM | #11
LQ Guru | Registered: Dec 2007 | Distribution: Centos | Posts: 5,286
Quote:
Originally Posted by emmalg
The 5GB is the size of this VM, i.e. the total size I can see if I do a df, as if it were the hard drive. VMs are still pretty magical to me - I just use them!
OK, they're magical to me too, and I don't even use them, so this answer is just some guesswork while waiting for someone who knows to answer.
What about that swapon command? Was that inside or outside the VM? I think you're saying the whole virtual disk drive of the VM is just 5GB, so I think it couldn't have much if any swap space.
I also don't see where you said how much physical ram the VM thinks it has. Maybe a free command inside the VM would clear up our understanding.
Since I'm not used to tiny VMs, I'm not used to tiny values like
kernel.shmmax = 105984000
I don't know whether that is way too small (meaning the desired shared region is some weird mapping that doesn't need physical back store) or whether that shmmax represents more fundamental limits in the VM and you just haven't given the VM enough resources to run the task you want to run.
Edit: I see you solved it while I was typing the above. Sorry about my errors that expanded this thread uselessly outside what you asked. In effect you asked a question whose direct answer was that kernel.shmmax parameter. We didn't really need to know any more than you provided in the first post. I just never knew that parameter could be a significant limit.
Last edited by johnsfine; 06-16-2009 at 12:26 PM.
06-16-2009, 01:24 PM | #12
LQ Guru | Registered: May 2005 | Location: Atlanta Georgia USA | Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO | Posts: 7,831
Many of the tunables for sysctl have "defaults", so they may not appear in sysctl.conf at all. You can add them to override the defaults, though.
Running sysctl makes the change to the running environment. Putting it into sysctl.conf ensures you get the same parameter after a reboot. Also, by doing something like "sysctl -p /etc/sysctl.conf" you can verify that what you put in the file is valid, because this sets variables by reading the file just as is done at boot.
You can run sysctl -a to see available variables.
Typing "man sysctl" and "man sysctl.conf will give you more information.
The messages you posted do have some meaning, as they clearly indicate the issue is with shared memory (we could guess that from "ShrMem", but we don't have to, because they clearly say things like "Unable to create share memory").
Note that it isn't always the size of shared memory segments (shmmax); sometimes it is the quantity of shared memory segments. For the shared memory related parameters you can do:
Code:
sysctl -a | grep -i shm
which might produce something like this:
Code:
kernel.shmmni = 4096
kernel.shmall = 268435456
kernel.shmmax = 4294967295
Shmmni is the maximum number of shared memory identifiers, and it could be that this parameter is too low (though I've not seen that in a long time). As you can see, even though I had only two shared memory parameters in my sysctl.conf, the above command displayed 3, which lets us know the 3rd one is at the default.
There are other reasons you could get shared memory errors even if the kernel and the application are configured correctly:
1) Many programs require a shared memory segment at a specific memory address. If something is already using that address (or spans a range that includes that address) it won't start. Most often this occurs because another copy of the application is running or wasn't stopped correctly (e.g. a kill -9 was done to its process) so the segment is still in the Interprocess Communication (IPC) table. You can examine what is in shared memory with the "ipcs -ma" command. (There are other IPC structures as well - see "man ipcs" for more detail.)
You can even remove shared memory segments with ipcrm, though you have to be very certain the segment is no longer in use, because removing a segment an application depends on will crash that application.
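For instance (the segment id below is made up; take the real one from the ipcs output on your system):
Code:
# list the current shared memory segments and their shmids
ipcs -m
# remove a stale segment by its shmid - only if you are sure nothing is using it
ipcrm -m 32769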
2) Segments have to be contiguous, and you may be asking for one that would fit into overall memory but for which there isn't enough contiguous space. Often you can solve this by ensuring the application requiring the largest segment is started before those requiring smaller ones.
Last edited by MensaWater; 06-16-2009 at 01:52 PM.
06-17-2009, 03:19 AM | #13
Member (Original Poster) | Registered: Jun 2009 | Location: Spain | Distribution: Various, Ubuntu, Fedora, Open Solaris, Solaris, RHEL, CentOS | Posts: 64
Hi Everyone
Thank you all very much for your help with this. I have certainly learnt a lot that I may well forget but will always be able to find on here later on!
Lazlow - yes, you are right too, I also had to check some of the limits.
For the record, I found the other settings and realised I've been caught out in a lie - it's apparently 64-bit Fedora. Memory is 4012MB and Memory Overhead (whatever that is) is 250.04MB.
Cheers
Emma