Old 11-29-2010, 01:31 PM   #1
nomb
Member
 
Registered: Jan 2006
Distribution: Debian Testing
Posts: 675

Rep: Reputation: 58
Memory Problem - RHEL 5.5 32bit w/PAE - oom_killer


So lately I was given a machine to work on that runs RHEL 5.5. The apps it runs are 32-bit, so that's what the OS is, and it was installed with the PAE kernel so it can use all 8GB of RAM.

The problem is that the box is randomly crashing.
Looking through the logs, I see things like:

Quote:
kernel: Out of memory: Killed process 2715 (dbus-daemon)
And then more messages saying it invoked oom_killer to kill the process.

I tried dumping the output of some commands to logs to see if I could find which process was using up the memory, but every process stayed below 1.4% (a rough sketch of the kind of loop I used is below). So reading up a little bit, I learned that:

"An OOM (Out Of Memory) error is what happens when the kernel runs out of memory in its own internal pools and is unable to reclaim memory from any other sources."

And that the kernel is supposed to use almost all of your physical RAM. I had already assumed that, because that is how it has always been, but now it's confirmed.
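
In case it's useful, this is roughly the kind of loop I was using to dump per-process memory to a log (the log path, interval, and process count are just examples, not exactly what I ran):

Code:
#!/bin/bash
# append a timestamped snapshot of the top memory consumers every 60 seconds
LOG=/var/log/mem-watch.log
while true; do
    {
        date '+%Y-%m-%d %H:%M:%S'
        ps -eo pid,comm,%mem,rss --sort=-rss | head -20
    } >> "$LOG"
    sleep 60
done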

Anyway, so I looked at a working box's /proc/meminfo and it says:

Quote:
MemTotal: 7643136 kB
MemFree: 184356 kB
Then I looked at the crashing box and saw:

Quote:
MemTotal: 8303320 kB
MemFree: 7119372 kB
Which doesn't seem normal to me.

I'm not quite sure how to proceed on troubleshooting / fixing the problem. I'm guessing that running a non-PAE kernel would probably keep it from crashing, but it would limit the RAM; I may test this while waiting on replies. The other option is running 64-bit RHEL and installing the 32-bit libraries to run the software.

Thanks,

Last edited by nomb; 11-29-2010 at 01:37 PM.
 
Old 11-29-2010, 01:43 PM   #2
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
How much swap space does the system have? If it has any, how much is it using?

There is a good chance you are describing symptoms of having too little swap space. Increasing the amount of swap space should be a very easy fix.
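
If you're not sure, something like this shows both the total and how much is in use:

Code:
free -m               # the "Swap:" row shows total / used / free in MB
swapon -s             # per-device swap usage
grep -i swap /proc/meminfo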

Quote:
Originally Posted by nomb View Post
The apps it runs are 32-bit, so that's what the OS is, and it was installed with the PAE kernel so it can use all 8GB of RAM.
32bit with PAE should be a very good environment for running 32bit apps on an 8GB system. A 64bit kernel might be better.

Quote:
"An OOM (Out Of Memory) error is what happens when the kernel runs out of memory in its own internal pools and is unable to reclaim memory from any other sources."
Where did you read that? It doesn't sound correct to me.

Quote:
And that the kernel is supposed to use almost all of your physical RAM.
Are we talking about the same thing here? What the kernel uses for itself? In a 32 bit system that is generally limited to 1GB.

With just 8GB of ram, exceeding kernel virtual memory limits would take either a very strange workload or some kind of resource leak.

I think it is more likely that kernel virtual memory is OK and you just need more swap space.

Quote:
Then I looked at the crashing box and saw:

MemTotal: 8303320 kB
MemFree: 7119372 kB
But that was after some processes had been killed, correct?

Quote:
I'm guessing running a non PAE kernel would probably keep it from crashing
No. Whatever the problem is, that would just make it fail sooner.

Quote:
The other option is running RHEL 64bit and installing the 32bit libraries to run the software.
If some resource leak were managing to exhaust kernel virtual memory, that would give you a lot more time to figure it out while the system is degrading rather than failing.

There are a lot of places to look to help figure out the problem, but you generally need to look after the system is sick but before it is killing processes. I can't guess how hard it will be to find the right time to look.

Last edited by johnsfine; 11-29-2010 at 01:46 PM.
 
Old 11-29-2010, 02:01 PM   #3
nomb
Member
 
Registered: Jan 2006
Distribution: Debian Testing
Posts: 675

Original Poster
Rep: Reputation: 58
Quote:
Originally Posted by johnsfine View Post
How much swap space does the system have? If it has any, how much is it using?

There is a good chance you are describing symptoms of having too little swap space. Increasing the amount of swap space should be a very easy fix.
It has 4GB of swap, 4GB free.

Quote:
Originally Posted by johnsfine View Post
Where did you read that? It doesn't sound correct to me.
From here http://linux-mm.org/OOM.

Quote:
Originally Posted by johnsfine View Post
Are we talking about the same thing here? What the kernel uses for itself? In a 32 bit system that is generally limited to 1GB.
I just remember reading somewhere that almost all of your physical ram should be in use.

Quote:
Originally Posted by johnsfine View Post
But that was after some processes had been killed, correct?
Fresh boot, nothing killed, but also nothing running.
 
Old 11-29-2010, 03:20 PM   #4
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
I guess I was overreacting to the word "own" in
Quote:
the kernel runs out of memory in its own internal pools
When considering memory issues for a 32 bit Linux system with large physical ram, you need to clearly distinguish between issues with the memory the kernel is using for itself vs. memory the kernel is managing for use by ordinary processes.

Despite the word "own" in that phrase, that article is about memory the kernel is managing for use by ordinary processes, not about memory the kernel is using for itself.

Quote:
I just remember reading somewhere that almost all of your physical ram should be in use.
After enough accesses have been made to different parts of the filesystem (since the last reboot), it is typical for almost all RAM that is not needed for more important things to be used for cache, so very little is free.

Quote:
Fresh boot, nothing killed, but also nothing running.
So relative to your comment about "ram should be in use", that doesn't apply, because not much has happened since the last reboot.

Relative to my comment about processes killed, all the processes have been killed since the last time memory was filled up, so none of the stats tell you about what filled memory.

The stats in /proc/meminfo (most of them together, not just the top two alone) would tell you a lot about the nature of the problem, but only if you look at them after the problem has started and before it has gotten bad enough to kill processes. I don't know enough about your use of the system, or about your problem, to give any better advice on how to grab those stats at a useful moment, but I could help you interpret them if you did.

Quote:
It has 4GB of swap, 4GB free.
I assume you mean 4GB of swap is free when the system hasn't done any significant work since the last reboot. That means nothing. What matters is how much is free when the system has a realistic load on it.

But anyway, how hard is it for you to increase the swap space? I still think a shortage of swap space is closely related to your problem. If you had more swap space that might completely fix the problem. If the problem can't be fixed by more swap space, it is still very likely that more swap space will extend the time from when the problem could be understood from /proc/meminfo contents to the time the failure occurs. So more swap space would increase the chances that you are able to look at /proc/meminfo at the right time.
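
If repartitioning is a hassle, a swap file is usually the least disruptive way to add more; roughly (the size and path here are just examples):

Code:
# as root: create and enable a 4GB swap file
dd if=/dev/zero of=/swapfile-extra bs=1M count=4096
chmod 600 /swapfile-extra
mkswap /swapfile-extra
swapon /swapfile-extra
# to make it permanent, add to /etc/fstab:
#   /swapfile-extra   swap   swap   defaults   0 0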

Last edited by johnsfine; 11-29-2010 at 03:27 PM.
 
Old 11-29-2010, 04:52 PM   #5
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,119

Rep: Reputation: 4120
Linux kernels divide the memory into zones - this is the likely problem. See the comment on the linked article re "low memory usage" (same applies to zone DMA). The oom messages should show all the zone usage(s) at the time of the error. Maybe also try the script at the bottom of the same article - looks like it saves everything relevant (run it as root).
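
Between OOM events you can also peek at the zone state directly; a rough example (the exact /proc/zoneinfo layout varies a little by kernel):

Code:
# each zone's free pages and min/low/high watermarks
grep -A 4 'zone' /proc/zoneinfo
# free-page counts per allocation order, per zone (fragmentation view)
cat /proc/buddyinfo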

If you can go with a 64-bit kernel do so - will save a lot of angst.
 
Old 11-30-2010, 09:01 AM   #6
nomb
Member
 
Registered: Jan 2006
Distribution: Debian Testing
Posts: 675

Original Poster
Rep: Reputation: 58
I'm currently working with Red Hat support. They think I need to increase my low memory size, and they sent me some good values to use. I'll let you guys know if this solves the issue.

When I was watching the memory on the box during a crash, it was hardly used, which kind of threw me off since it was giving an out-of-memory error. However, since the memory is split, I understand it as: high memory is for applications and low memory is for the kernel. Red Hat confirmed that oom_killer gets invoked when the kernel's memory pool is running out, which is probably why they are having me adjust the low memory. We'll see how it goes.
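
For anyone following along, the split is visible in /proc/meminfo on the PAE kernel; this is roughly what I'm keeping an eye on (LowFree being the interesting one, since the kernel's own allocations have to come out of low memory):

Code:
grep -E '^(MemTotal|MemFree|LowTotal|LowFree|HighTotal|HighFree)' /proc/meminfo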

While I'm working with Red Hat, I'm going to increase the swap as per your suggestion and run a script to dump meminfo, so hopefully we can spot what is going on.

Last edited by nomb; 11-30-2010 at 09:04 AM.
 
Old 11-30-2010, 09:12 AM   #7
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by nomb View Post
I'm currently working with Red Hat support. They think I need to increase my low memory size, and they sent me some good values to use. I'll let you guys know if this solves the issue.
As a Centos user, I'm very glad Red Hat exists. But when I read things like the above, I'm also glad I don't use Red Hat.

Unless you gave them very different info than you gave us, their diagnosis is unlikely to be correct (even though syg00 seems to be reaching roughly the same diagnosis as Red Hat support).

Quote:
I'm going to increase the swap as per you suggestion and run a script to dump meminfo so hopefully we can spot what is going on.
Good plan. Please remember to tell us how it works out.

Last edited by johnsfine; 11-30-2010 at 09:15 AM.
 
Old 11-30-2010, 12:05 PM   #8
nomb
Member
 
Registered: Jan 2006
Distribution: Debian Testing
Posts: 675

Original Poster
Rep: Reputation: 58
While I'm waiting on the box to crash, I thought I'd share what Red Hat told me to do:

Add the following to /etc/sysctl.conf:
Code:
vm.lower_zone_protection = 100            # initial: non-existent
vm.lowmem_reserve_ratio = 256 256 9       # initial: 256 256 32
vm.swappiness = 80                        # initial: 60
vm.min_free_kbytes = 19000                # initial: 3831
vm.dirty_expire_centisecs = 2000          # initial: 2999
vm.dirty_writeback_centisecs = 400        # initial: 499
vm.vfs_cache_pressure = 200               # initial: 100
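
For the record, these can be applied without a reboot, e.g.:

Code:
sysctl -p                     # reload everything from /etc/sysctl.conf
sysctl -w vm.swappiness=80    # or set a single value on the fly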

Last edited by nomb; 11-30-2010 at 12:33 PM.
 
Old 11-30-2010, 01:41 PM   #9
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by nomb View Post
While I'm waiting on the box to crash
If the system is getting significant use, you should also post a full /proc/meminfo to this thread as a baseline (what it looks like after significant use has started but before problems appear to have started).
 
Old 11-30-2010, 02:33 PM   #10
nomb
Member
 
Registered: Jan 2006
Distribution: Debian Testing
Posts: 675

Original Poster
Rep: Reputation: 58
OK, I was lucky enough to get the /proc/meminfo information during one of the crashes. I grabbed the information once per second and used xargs to put each sample on one line, because I was pulling it into a Splunk server at the same time.
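
The capture loop was nothing fancy; roughly this (the log path is just an example):

Code:
#!/bin/bash
# one flattened /proc/meminfo sample per second, one line per sample
while true; do
    echo "$(date '+%Y-%m-%dT%H:%M:%S') $(cat /proc/meminfo | xargs)" >> /var/log/meminfo-watch.log
    sleep 1
done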

I am still waiting on an OK to change the swap size btw.

You'll notice there is an hour difference between the last two entries. That is when the box froze and then I restarted it.

Thanks again.

nomb
Attached Files
File Type: txt memory.txt (233.9 KB, 31 views)
 
Old 11-30-2010, 02:47 PM   #11
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
I think that indicates the problem is in the memory the kernel is using itself (as opposed to the memory the kernel is managing for other processes).

It also indicates I was wrong about swap space. More swap space won't help. It won't even slow the problem down to get a better look.

A 64 bit kernel would slow the problem down a LOT and maybe fix it. But switching to a 64 bit kernel is a big change.

The info that will tell you more details about the problem is in /proc/slabinfo.
That is bigger and harder to understand than /proc/meminfo, but the same general concept applies: capture it as the system builds toward the problem and you can learn more about the nature of the problem.
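
One rough way to watch it (this just estimates each cache's size as number of objects times object size, and assumes the usual two header lines of the slabinfo 2.x format; close enough for spotting a runaway cache):

Code:
# top 10 slab caches by approximate size, refreshed every 10 seconds
while true; do
    date
    awk 'NR > 2 { printf "%-24s %10d KB\n", $1, $3 * $4 / 1024 }' /proc/slabinfo \
        | sort -k2 -rn | head -10
    sleep 10
done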

The memory problem inside the kernel is almost certainly caused by some kind of resource leak in some application. Watching /proc/slabinfo may give some idea of what kind of resource is leaking, but you would still need to figure out which application is leaking resources and where in that application's source code the bug is.

Last edited by johnsfine; 11-30-2010 at 02:56 PM.
 
Old 11-30-2010, 02:58 PM   #12
nomb
Member
 
Registered: Jan 2006
Distribution: Debian Testing
Posts: 675

Original Poster
Rep: Reputation: 58
That one is much bigger than the other one, so it will probably be hard for me to capture it the same way. What do you think I might be looking for?

nomb
 
Old 11-30-2010, 03:30 PM   #13
nomb
Member
 
Registered: Jan 2006
Distribution: Debian Testing
Posts: 675

Original Poster
Rep: Reputation: 58
I'm watching with slabinfo.

names_cache and size-32 keep going up. But names_cache is way higher. Right now it is at:

Code:
names_cache - 213092K
size-32     -   5316K
Not really concerned with size-32; it is pretty small. But names_cache keeps growing; for instance, now it is 241356K.
I am not having much luck finding out what that is on Google, but does that seem problematic? I'm just sorting by size, so the problem could be something different.

On a 64bit box, the names_cache is only 300K.

nomb

** update **

Currently:
Code:
names_cache: 570852K
size-32    : 8464K
Code:
names_cache: 777668K
size-32    : 10244K
Box just hung:
Code:
names_cache: 777676K
size-32    : 10244K
Everything else has stayed 4056K or below.

Last edited by nomb; 11-30-2010 at 04:01 PM.
 
Old 11-30-2010, 03:59 PM   #14
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by nomb View Post
names_cache keeps growing; for instance, now it is 241356K.
I am not having much luck finding out what that is on Google, but does that seem problematic?
I didn't have much luck with "names_cache" on google either.

names_cache is absolutely at the heart of your problem. You can't allow it to grow anywhere near that big (and still have hopes of a stable 32 bit system). That resource leak seems too severe for even a 64 bit kernel to survive for long.

But I can't help you much with what resource is actually leaking.

Do you know yet which process is driving the resource leak? Your earlier result seems to indicate that the kernel associates the problem resource with a process and when that process is killed, the resource is released.

Hopefully someone who knows more will jump into this thread to help.


Also, despite my generally low opinion of customer support organizations, including Red Hat's, you should try them as well. You have a key new piece of information that you didn't have before (runaway allocation of names_cache seen in slabinfo). Pass that along to them and see what they can make of it.

My limited understanding of the limited info I found via Google (so don't trust me on this) says the names_cache is roughly what you might expect: a collection of filenames that the kernel is currently working on. But does that mean one of your applications is stressing the kernel by making it work on absurdly long filenames? I don't know the limits on filename length. More importantly, are the limits enforced by the filesystem after the kernel has already cached a copy of the name in its own address space, or are limits on name length enforced before the kernel makes a copy of the name?

Normally, one would expect something like a name cache to be abused by too many names rather than by names that are individually too long. That might be the case this time. But then how is the name cache so far and away bigger than any other kernel memory use? A resource leak involving too many names, each of reasonable length, ought to also involve a larger amount of other kernel memory use. I saw what you said about "size-32", and that probably is an associated memory use, but it isn't big enough to make sense as the associated use if the names are individually of reasonable size.

Last edited by johnsfine; 11-30-2010 at 04:19 PM.
 
Old 11-30-2010, 04:10 PM   #15
nomb
Member
 
Registered: Jan 2006
Distribution: Debian Testing
Posts: 675

Original Poster
Rep: Reputation: 58
If you are referring to the memory.txt I posted, the box stayed hung; I had to reboot it.
I just rebooted another box, same image, into the non-PAE kernel. Its names_cache is also increasing, but at a much, much slower rate. Perhaps it will hit a point where it gets stable.

Does anyone happen to know what names_cache is? Or how you can find out what is causing it to leak?

I've been keeping the Red Hat thread updated as well.

Thanks,
nomb

Last edited by nomb; 11-30-2010 at 04:15 PM.
 
  

