LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - General (http://www.linuxquestions.org/questions/linux-general-1/)
-   -   OOM killer even though there is memory available (http://www.linuxquestions.org/questions/linux-general-1/oom-killer-even-though-there-is-memory-available-917400/)

Bilb 12-06-2011 09:46 AM

OOM killer even though there is memory available
 
We currently have two identical servers with the exact same setup: server A was set up from scratch, and server B is a recovery of server A, so the software is also identical.

Both servers are running a single Xeon CPU with 18GB of RAM and 8GB of swap.

Server A is running perfectly fine with no issues.

Server B seems to be a lot slower at writing data to disk.
As a test, I performed a timed dd from /dev/zero on both servers, which showed that this really was the case and not just user perception.

It gets more interesting if I increase this 8GB zero file to 16GB. On server A this is no problem and completes in under 2 minutes, sync included. On server B, once total RAM usage reaches around 13GB (climbing due to caching), the OOM killer kicks in even though there is still plenty of free RAM and the swap space hasn't even been touched.
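For reference, the timed test described above was roughly along these lines (the output path, block size, and count are assumptions; the exact command was not given):

```shell
# Write an 8GB file of zeros and include the final sync in the timing
# (path and block size are assumptions, not from the original post):
time sh -c 'dd if=/dev/zero of=/data/zerofile bs=1M count=8192 && sync'

# The 16GB variant that triggers the OOM killer on server B:
time sh -c 'dd if=/dev/zero of=/data/zerofile bs=1M count=16384 && sync'
```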

I am running memtest86+ on server B at the moment; it is basically finished and is not showing any errors.

The OOM message is:
Code:

tnslsnr invoked oom-killer: gfp_mask=0xd0, order=0, oomkilladj=0
 [<c045b55f>] out_of_memory+0x72/0x1a3
 [<c045cae2>] __alloc_pages+0x24e/0x2cf
 [<c0472d00>] cache_alloc_refill+0x275/0x48a
 [<c0472a81>] kmem_cache_alloc+0x41/0x4b
 [<c05c0171>] __alloc_skb+0x2a/0xfe
 [<c05f415c>] tcp_connect+0x181/0x313
 [<c05f73bf>] tcp_v4_connect+0x511/0x616
 [<c0437300>] __wake_up_bit+0x29/0x2e
 [<c0600c6f>] inet_stream_connect+0x7d/0x208
 [<c04f316d>] copy_from_user+0x31/0x5d
 [<c05bbd5c>] sys_connect+0x7d/0xa9
 [<c048c107>] d_alloc+0x151/0x17f
 [<c049b2f1>] inotify_d_instantiate+0xf/0x32
 [<c048b5d5>] d_rehash+0x1c/0x2b
 [<c05babec>] sock_attach_fd+0x6c/0xcc
 [<c0474d0a>] fd_install+0x21/0x50
 [<c05bbec0>] sys_socketcall+0x98/0x1b7
 [<c0407f4b>] do_syscall_trace+0xab/0xb1
 [<c0404f4b>] syscall_call+0x7/0xb
 =======================
Mem-info:
DMA per-cpu:
cpu 0 hot: high 0, batch 1 used:0
cpu 0 cold: high 0, batch 1 used:0
cpu 1 hot: high 0, batch 1 used:0
cpu 1 cold: high 0, batch 1 used:0
cpu 2 hot: high 0, batch 1 used:0
cpu 2 cold: high 0, batch 1 used:0
cpu 3 hot: high 0, batch 1 used:0
cpu 3 cold: high 0, batch 1 used:0
cpu 4 hot: high 0, batch 1 used:0
cpu 4 cold: high 0, batch 1 used:0
cpu 5 hot: high 0, batch 1 used:0
cpu 5 cold: high 0, batch 1 used:0
cpu 6 hot: high 0, batch 1 used:0
cpu 6 cold: high 0, batch 1 used:0
cpu 7 hot: high 0, batch 1 used:0
cpu 7 cold: high 0, batch 1 used:0
DMA32 per-cpu: empty
Normal per-cpu:
cpu 0 hot: high 186, batch 31 used:31
cpu 0 cold: high 62, batch 15 used:52
cpu 1 hot: high 186, batch 31 used:21
cpu 1 cold: high 62, batch 15 used:54
cpu 2 hot: high 186, batch 31 used:170
cpu 2 cold: high 62, batch 15 used:54
cpu 3 hot: high 186, batch 31 used:31
cpu 3 cold: high 62, batch 15 used:54
cpu 4 hot: high 186, batch 31 used:27
cpu 4 cold: high 62, batch 15 used:59
cpu 5 hot: high 186, batch 31 used:4
cpu 5 cold: high 62, batch 15 used:51
cpu 6 hot: high 186, batch 31 used:22
cpu 6 cold: high 62, batch 15 used:57
cpu 7 hot: high 186, batch 31 used:18
cpu 7 cold: high 62, batch 15 used:60
HighMem per-cpu:
cpu 0 hot: high 186, batch 31 used:58
cpu 0 cold: high 62, batch 15 used:9
cpu 1 hot: high 186, batch 31 used:5
cpu 1 cold: high 62, batch 15 used:1
cpu 2 hot: high 186, batch 31 used:19
cpu 2 cold: high 62, batch 15 used:12
cpu 3 hot: high 186, batch 31 used:27
cpu 3 cold: high 62, batch 15 used:11
cpu 4 hot: high 186, batch 31 used:176
cpu 4 cold: high 62, batch 15 used:5
cpu 5 hot: high 186, batch 31 used:19
cpu 5 cold: high 62, batch 15 used:8
cpu 6 hot: high 186, batch 31 used:37
cpu 6 cold: high 62, batch 15 used:4
cpu 7 hot: high 186, batch 31 used:178
cpu 7 cold: high 62, batch 15 used:7
Free pages:    6047760kB (6040456kB HighMem)
Active:28828 inactive:2945436 dirty:48254 writeback:0 unstable:0 free:1511940 slab:185256 mapped-file:19149 mapped-anon:20616 pagetables:1195
DMA free:3588kB min:68kB low:84kB high:100kB active:0kB inactive:12kB present:16384kB pages_scanned:884 all_unreclaimable? yes
lowmem_reserve[]: 0 0 880 18927
DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 880 18927
Normal free:3716kB min:3756kB low:4692kB high:5632kB active:2556kB inactive:2496kB present:901120kB pages_scanned:11493 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 144383
HighMem free:6040456kB min:512kB low:19788kB high:39064kB active:112756kB inactive:11779236kB present:18481148kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA: 1*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 3588kB
DMA32: empty
Normal: 15*4kB 109*8kB 6*16kB 0*32kB 0*64kB 1*128kB 0*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 3716kB
HighMem: 42*4kB 26*8kB 21*16kB 3762*32kB 2574*64kB 1060*128kB 447*256kB 223*512kB 130*1024kB 49*2048kB 1259*4096kB = 6040456kB
2953648 pagecache pages
Swap cache: add 0, delete 0, find 0/0, race 0+0
Free swap  = 7902232kB
Total swap = 7902232kB
Free swap:      7902232kB
4849663 pages of RAM
4620287 pages of HIGHMEM
172939 reserved pages
2967456 pages shared
0 pages swap cached
48254 pages dirty
0 pages writeback
19149 pages mapped
185256 pages slab
1195 pages pagetables
Out of memory: Killed process 4362, UID 48, (httpd).

If I do not cancel the dd command quickly enough, the server will hang.

I see the OOM output reports "DMA32: empty", but why only on one server and not the other?

I have tried the dd command on different RAID arrays installed in the server, and both of the two I tried cause the same problem. They are on the same RAID controller, which uses the cciss driver.

Does anyone have any ideas of what is the cause of this?

johnsfine 12-06-2011 10:00 AM

I don't see any hint in your post about what processes are using the 8GB of swap space, or how different the use of swap space and RAM might be between the two servers.

The free command will tell you about overall use of swap space. The RES column of top (after you sort on that column) is one of several places you can get a rough idea of memory use per process. Getting info on per process use of swap space may be much harder. Post some of the easier to get memory statistics and that might give some clue about where you need to take a closer look.
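For example, a quick look at both could be taken with something like:

```shell
# Overall RAM and swap usage, in megabytes:
free -m

# Rough per-process memory use, sorted by resident set size
# (equivalent to sorting top on the RES column):
ps aux --sort=-rss | head -15
```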

You may need to increase swap space. A large amount of unused swap space may be necessary for insurance against OOM during unusual conditions. If you are using a significant fraction of the 8GB of swap, then it is not enough for insurance. Disk space is cheap. Giving more of it to swap space may avoid serious problems.
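If more swap is wanted, a swap file is a quick way to add it without repartitioning (the path below is just an example):

```shell
# Create and enable an extra 8GB swap file (run as root;
# /swapfile2 is an example path):
dd if=/dev/zero of=/swapfile2 bs=1M count=8192
chmod 600 /swapfile2
mkswap /swapfile2
swapon /swapfile2

# Verify it is active:
swapon -s
```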

It sounds like you ALSO have a problem with disk writes on server B. I don't know anything about diagnosing that problem. Slow disk writes could easily be the direct cause of the OOM and could be the only reason the OOM occurs only on one of the two servers. But I think the OOM still indicates either you are using more anonymous memory in some process(es) than you intended, or you configured less swap space than your workload needs for safe operation.

Bilb 12-06-2011 10:20 AM

Thanks for the quick reply.

On both servers, none of the 8GB of swap space is actually used during the dd file creation, and on server B when the OOM kicks in there is still 5-6GB of free RAM as well (going by top). This is part of why I don't see a reason for the OOM killer to kick in.

I will get a free report and post it when I can redo the dd test.

Increasing the swap space is not an issue; however, as mentioned above, the swap is not actually being used at all and free RAM seems to be available.

johnsfine 12-06-2011 11:30 AM

Quote:

Originally Posted by Bilb (Post 4543508)
Increasing the swap space is not an issue; however, as mentioned above, the swap is not actually being used at all and free RAM seems to be available.

There are cases in Linux (though far more common in Windows) where a larger amount of unused swap space would be required to avoid an out of memory condition, even though actually using that swap space doesn't become necessary (the system doesn't really need the swap space, but it really thinks it needs the swap space).

You can avoid such conditions in Linux by changing the "over commit" settings. It is possible the over commit settings in your server B somehow got changed to non default values that more readily cause OOM. If so, it would be best to set them back to default.

It is also possible that you have some unusual use of address space by some of your processes that (combined with default over commit settings) leads to the system thinking it needs an absurdly large amount of free swap space. In that case some experts (in other threads) have advised adjusting the over commit settings so the system no longer thinks it needs so much swap space. That approach can work well if you know what you're doing, but most people in that situation misunderstand the documentation of over commit settings and get the details wrong. Simply providing the excess swap space that the system thinks it needs may be easier and safer than messing with over commit settings.
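To compare the overcommit settings on the two servers (the values shown as defaults are the usual kernel defaults):

```shell
# Inspect the current settings on both servers:
cat /proc/sys/vm/overcommit_memory   # 0 = heuristic (default),
                                     # 1 = always, 2 = strict
cat /proc/sys/vm/overcommit_ratio    # default is 50

# Restore the defaults if server B differs (as root):
sysctl -w vm.overcommit_memory=0
sysctl -w vm.overcommit_ratio=50
```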

Bilb 12-07-2011 03:03 AM

I really do not think the amount of swap space is the issue here. This is only creating a 16GB file.

I have, however, increased the swap space to 32GB and retried creating the 16GB file; it is still no better.

Here is the OOM message:
Code:

Free pages:    6085588kB (6078260kB HighMem)
Active:39697 inactive:2921944 dirty:98764 writeback:0 unstable:0 free:1521397 slab:184886 mapped-file:18714 mapped-anon:17820 pagetables:1140
DMA free:3588kB min:68kB low:84kB high:100kB active:28kB inactive:0kB present:16384kB pages_scanned:239240 all_unreclaimable? yes
lowmem_reserve[]: 0 0 880 18927
DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 880 18927
Normal free:3740kB min:3756kB low:4692kB high:5632kB active:3860kB inactive:132kB present:901120kB pages_scanned:6750106 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 144383
HighMem free:6078260kB min:512kB low:19788kB high:39064kB active:154900kB inactive:11687516kB present:18481148kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA: 1*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 3588kB
DMA32: empty
Normal: 91*4kB 70*8kB 8*16kB 0*32kB 0*64kB 1*128kB 0*256kB 1*512kB 0*1024kB 1*2048kB 0*4096kB = 3740kB
HighMem: 43*4kB 31*8kB 23*16kB 25*32kB 8*64kB 0*128kB 25*256kB 65*512kB 33*1024kB 977*2048kB 977*4096kB = 6078260kB
2943821 pagecache pages
Swap cache: add 91656, delete 91651, find 32034/36615, race 0+0
Free swap  = 31608928kB
Total swap = 31609056kB
Free swap:      31608928kB
4849663 pages of RAM
4620287 pages of HIGHMEM
172939 reserved pages
2975879 pages shared
5 pages swap cached
98764 pages dirty
0 pages writeback
18714 pages mapped
184886 pages slab
1140 pages pagetables
Out of memory: Killed process 4387, UID 48, (httpd).

This again shows that there is plenty of RAM available (6GB), almost no swap space used, and the only thing reported as empty is the DMA32 zone ("DMA32: empty"). I am not sure how the DMA32 zone can become empty on its own like this, however.

The server seems to have locked up, so I was unable to run the free command, but you can see the memory usage from the OOM message above.
The only way I can see the swap space causing the problem is if the disks are struggling so much that the system cannot write out quickly enough.
I am going to check the RAID controller settings on both servers to confirm they have been set up identically, but otherwise I am still not sure how the DMA32 zone has become empty and why the swap space is not being used.
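One thing that stands out in both OOM dumps is that the free pages are almost entirely HighMem, so on a 32-bit kernel it may be worth watching the low-memory figures in /proc/meminfo while the dd test runs, for example:

```shell
# Watch low/high memory and dirty page counts once a second while
# the dd test is running (LowFree/HighFree only appear on 32-bit
# kernels with HighMem enabled):
while true; do
    grep -E 'LowFree|HighFree|Dirty|Writeback' /proc/meminfo
    sleep 1
done
```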
