Slackware This Forum is for the discussion of Slackware Linux.
07-24-2006, 08:57 PM | #1
Member
Registered: Nov 2005
Distribution: Slackware 10.2 2.6.20
Posts: 68
OOM-Killer woes
I'm looking for any advice, insight, experience, or diagnostic help with the oom-killer. Simple software upgrades are not fixing it, and I don't know enough about the OOM killer to figure this out on my own.
I'm running an Oracle 10gR2 10.2.0.2 database and a Java program on a server, and about every 4-5 days the OOM-killer runs and usually kills the Java process.
I started with Slackware 10.2 under 2.6.13 and got the OOM. Then I applied all the 10.2 patches: got the OOM. Brought the kernel up to 2.6.16.24: got the OOM. Patched all the way to -current (I was hoping 11 would have been out by now) and patched Oracle to 10.2.0.2: still get the OOM.
This machine is beefy, and the sheer size of it may be part of the problem.
It's a dual Xeon 3.6 GHz with HT enabled, 6 GB RAM and a nasty Intel SRCS16 (megaraid) controller (nasty because if a drive goes down, the WHOLE FREAKN 0+1 ARRAY GOES DOWN UNTIL REBOOT - but that's another story).
The only changes I've made to the kernel config are the processor type (Pentium 4/Xeon), SMP, the SMT scheduler, and HighMem 64G. I load all the modules with an initrd. I use plain partitions for /boot, / and swap, and LVM for a pile of volumes (6 x 250 GB drives in the 0+1 array).
I have 24 GB of swap, but I've only ever seen about 32 MB of it get used.
My feeling is that it has something to do with 'slab', as in slabtop, but I don't know what to monitor there.
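In case it helps someone tell me what to watch, here's the kind of snapshot I could log from cron between now and the next OOM event. It just reads standard /proc/meminfo fields (LowTotal/LowFree only show up on highmem kernels), so treat it as a sketch:

```shell
#!/bin/sh
# Print a timestamped snapshot of the counters most relevant to
# lowmem/slab pressure on a 32-bit highmem kernel.  LowTotal and
# LowFree only appear on highmem kernels; the grep simply matches
# whichever of these lines exist on this box.
date
grep -E '^(MemTotal|LowTotal|LowFree|Slab|Committed_AS):' /proc/meminfo
```

Run every few minutes from cron with the output appended to a log, the trend leading up to an OOM event would at least be on disk.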
After the upgrade to 2.6.16.24, I was able to echo 3 > /proc/sys/vm/drop_caches after an OOM event and get about 2.5 GB back.
I can tell you all the gory details about the setup if you want, it's all documented.
I'd really like to stick with Slack for this project, but I may have to go to Oracle Support on this one, which would probably mean a switch to Red Hat or SLES.
Here's a copy of the dmesg from a recent OOM. I'm not even sure whether this is a single event or several...
Code:
oom-killer: gfp_mask=0x84d0, order=0
[<c014480a>] out_of_memory+0xa6/0xd8
[<c01459ee>] __alloc_pages+0x2e3/0x308
[<c011706c>] pte_alloc_one+0x11/0x12
[<c014d062>] __pte_alloc+0x27/0xbf
[<c014ff7e>] __handle_mm_fault+0x2d4/0x34f
[<c011763a>] do_page_fault+0x1bb/0x642
[<c011747f>] do_page_fault+0x0/0x642
[<c010380b>] error_code+0x4f/0x54
Mem-info:
DMA per-cpu:
cpu 0 hot: high 0, batch 1 used:0
cpu 0 cold: high 0, batch 1 used:0
cpu 1 hot: high 0, batch 1 used:0
cpu 1 cold: high 0, batch 1 used:0
cpu 2 hot: high 0, batch 1 used:0
cpu 2 cold: high 0, batch 1 used:0
cpu 3 hot: high 0, batch 1 used:0
cpu 3 cold: high 0, batch 1 used:0
DMA32 per-cpu: empty
Normal per-cpu:
cpu 0 hot: high 186, batch 31 used:30
cpu 0 cold: high 62, batch 15 used:55
cpu 1 hot: high 186, batch 31 used:6
cpu 1 cold: high 62, batch 15 used:57
cpu 2 hot: high 186, batch 31 used:29
cpu 2 cold: high 62, batch 15 used:60
cpu 3 hot: high 186, batch 31 used:55
cpu 3 cold: high 62, batch 15 used:58
HighMem per-cpu:
cpu 0 hot: high 186, batch 31 used:6
cpu 0 cold: high 62, batch 15 used:1
cpu 1 hot: high 186, batch 31 used:185
cpu 1 cold: high 62, batch 15 used:10
cpu 2 hot: high 186, batch 31 used:133
cpu 2 cold: high 62, batch 15 used:14
cpu 3 hot: high 186, batch 31 used:131
cpu 3 cold: high 62, batch 15 used:12
Free pages: 89496kB (82264kB HighMem)
Active:1122474 inactive:198852 dirty:39 writeback:0 unstable:0 free:22374 slab:13617 mapped:927454 pagetables:193027
DMA free:3588kB min:68kB low:84kB high:100kB active:0kB inactive:0kB present:16384kB pages_scanned:46 all_unreclaimable? yes
lowmem_reserve[]: 0 0 880 6640
DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 880 6640
Normal free:3644kB min:3756kB low:4692kB high:5632kB active:8572kB inactive:8336kB present:901120kB pages_scanned:17572 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 46080
HighMem free:82264kB min:512kB low:6664kB high:12816kB active:4481324kB inactive:787072kB present:5898240kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA: 1*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 3588kB
DMA32: empty
Normal: 1*4kB 1*8kB 3*16kB 2*32kB 13*64kB 1*128kB 2*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 3644kB
HighMem: 8422*4kB 4930*8kB 163*16kB 10*32kB 15*64kB 7*128kB 5*256kB 2*512kB 2*1024kB 0*2048kB 0*4096kB = 82264kB
Swap cache: add 26123, delete 25895, find 8523/11456, race 0+3
Free swap = 21932680kB
Total swap = 21967532kB
Free swap: 21932680kB
oom-killer: gfp_mask=0xd0, order=0
[<c014480a>] out_of_memory+0xa6/0xd8
[<c01459ee>] __alloc_pages+0x2e3/0x308
[<c0145a34>] __get_free_pages+0x21/0x41
[<c0196c5d>] proc_info_read+0x40/0x9c
[<c01614ef>] vfs_read+0x1c7/0x1cc
[<c0161841>] sys_read+0x51/0x80
[<c0102d09>] syscall_call+0x7/0xb
Mem-info:
DMA per-cpu:
cpu 0 hot: high 0, batch 1 used:0
cpu 0 cold: high 0, batch 1 used:0
cpu 1 hot: high 0, batch 1 used:0
cpu 1 cold: high 0, batch 1 used:0
cpu 2 hot: high 0, batch 1 used:0
cpu 2 cold: high 0, batch 1 used:0
cpu 3 hot: high 0, batch 1 used:0
cpu 3 cold: high 0, batch 1 used:0
DMA32 per-cpu: empty
Normal per-cpu:
cpu 0 hot: high 186, batch 31 used:30
cpu 0 cold: high 62, batch 15 used:55
cpu 1 hot: high 186, batch 31 used:7
cpu 1 cold: high 62, batch 15 used:57
cpu 2 hot: high 186, batch 31 used:30
cpu 2 cold: high 62, batch 15 used:60
cpu 3 hot: high 186, batch 31 used:55
cpu 3 cold: high 62, batch 15 used:58
HighMem per-cpu:
cpu 0 hot: high 186, batch 31 used:6
cpu 0 cold: high 62, batch 15 used:1
cpu 1 hot: high 186, batch 31 used:185
cpu 1 cold: high 62, batch 15 used:10
cpu 2 hot: high 186, batch 31 used:133
cpu 2 cold: high 62, batch 15 used:14
cpu 3 hot: high 186, batch 31 used:131
cpu 3 cold: high 62, batch 15 used:12
Free pages: 89496kB (82264kB HighMem)
Active:1122475 inactive:198851 dirty:39 writeback:0 unstable:0 free:22374 slab:13617 mapped:927454 pagetables:193027
DMA free:3588kB min:68kB low:84kB high:100kB active:0kB inactive:0kB present:16384kB pages_scanned:46 all_unreclaimable? yes
lowmem_reserve[]: 0 0 880 6640
DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 880 6640
Normal free:3644kB min:3756kB low:4692kB high:5632kB active:8576kB inactive:8332kB present:901120kB pages_scanned:17572 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 46080
HighMem free:82264kB min:512kB low:6664kB high:12816kB active:4481324kB inactive:787072kB present:5898240kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA: 1*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 3588kB
DMA32: empty
Normal: 1*4kB 1*8kB 3*16kB 2*32kB 13*64kB 1*128kB 2*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 3644kB
HighMem: 8422*4kB 4930*8kB 163*16kB 10*32kB 15*64kB 7*128kB 5*256kB 2*512kB 2*1024kB 0*2048kB 0*4096kB = 82264kB
Swap cache: add 26123, delete 25895, find 8523/11456, race 0+3
Free swap = 21932680kB
Total swap = 21967532kB
Free swap: 21932680kB
1703936 pages of RAM
1474560 pages of HIGHMEM
145591 reserved pages
17592500 pages shared
228 pages swap cached
39 pages dirty
0 pages writeback
927454 pages mapped
13617 pages slab
193027 pages pagetables
Out of Memory: Kill process 21958 (runDiamond.sh) score 2432270 and children.
Out of memory: Killed process 21974 (java).
oom-killer: gfp_mask=0x84d0, order=0
[<c014480a>] out_of_memory+0xa6/0xd8
[<c01459ee>] __alloc_pages+0x2e3/0x308
[<c011706c>] pte_alloc_one+0x11/0x12
[<c014d062>] __pte_alloc+0x27/0xbf
[<c014ff7e>] __handle_mm_fault+0x2d4/0x34f
[<c011763a>] do_page_fault+0x1bb/0x642
[<c011747f>] do_page_fault+0x0/0x642
[<c010380b>] error_code+0x4f/0x54
Mem-info:
DMA per-cpu:
cpu 0 hot: high 0, batch 1 used:0
cpu 0 cold: high 0, batch 1 used:0
cpu 1 hot: high 0, batch 1 used:0
cpu 1 cold: high 0, batch 1 used:0
cpu 2 hot: high 0, batch 1 used:0
cpu 2 cold: high 0, batch 1 used:0
cpu 3 hot: high 0, batch 1 used:0
cpu 3 cold: high 0, batch 1 used:0
DMA32 per-cpu: empty
Normal per-cpu:
cpu 0 hot: high 186, batch 31 used:30
cpu 0 cold: high 62, batch 15 used:55
cpu 1 hot: high 186, batch 31 used:7
cpu 1 cold: high 62, batch 15 used:57
cpu 2 hot: high 186, batch 31 used:30
cpu 2 cold: high 62, batch 15 used:60
cpu 3 hot: high 186, batch 31 used:55
cpu 3 cold: high 62, batch 15 used:58
HighMem per-cpu:
cpu 0 hot: high 186, batch 31 used:6
cpu 0 cold: high 62, batch 15 used:1
cpu 1 hot: high 186, batch 31 used:156
cpu 1 cold: high 62, batch 15 used:10
cpu 2 hot: high 186, batch 31 used:133
cpu 2 cold: high 62, batch 15 used:14
cpu 3 hot: high 186, batch 31 used:131
cpu 3 cold: high 62, batch 15 used:12
Free pages: 89620kB (82388kB HighMem)
Active:1122486 inactive:198851 dirty:39 writeback:0 unstable:0 free:22405 slab:13617 mapped:927324 pagetables:193027
DMA free:3588kB min:68kB low:84kB high:100kB active:0kB inactive:0kB present:16384kB pages_scanned:47 all_unreclaimable? yes
lowmem_reserve[]: 0 0 880 6640
DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 880 6640
Normal free:3644kB min:3756kB low:4692kB high:5632kB active:8576kB inactive:8336kB present:901120kB pages_scanned:17638 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 46080
HighMem free:82388kB min:512kB low:6664kB high:12816kB active:4481368kB inactive:787068kB present:5898240kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA: 1*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 3588kB
DMA32: empty
Normal: 1*4kB 1*8kB 3*16kB 2*32kB 13*64kB 1*128kB 2*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 3644kB
HighMem: 8453*4kB 4930*8kB 163*16kB 10*32kB 15*64kB 7*128kB 5*256kB 2*512kB 2*1024kB 0*2048kB 0*4096kB = 82388kB
Swap cache: add 26123, delete 25895, find 8523/11456, race 0+3
Free swap = 21932680kB
Total swap = 21967532kB
Free swap: 21932680kB
1703936 pages of RAM
1474560 pages of HIGHMEM
145591 reserved pages
17592465 pages shared
228 pages swap cached
39 pages dirty
0 pages writeback
890789 pages mapped
13617 pages slab
193027 pages pagetables
oom-killer: gfp_mask=0xd0, order=0
[<c014480a>] out_of_memory+0xa6/0xd8
[<c01459ee>] __alloc_pages+0x2e3/0x308
[<c0145a34>] __get_free_pages+0x21/0x41
[<c0196c5d>] proc_info_read+0x40/0x9c
[<c01614ef>] vfs_read+0x1c7/0x1cc
[<c0161841>] sys_read+0x51/0x80
[<c0102d09>] syscall_call+0x7/0xb
Mem-info:
DMA per-cpu:
cpu 0 hot: high 0, batch 1 used:0
cpu 0 cold: high 0, batch 1 used:0
cpu 1 hot: high 0, batch 1 used:0
cpu 1 cold: high 0, batch 1 used:0
cpu 2 hot: high 0, batch 1 used:0
cpu 2 cold: high 0, batch 1 used:0
cpu 3 hot: high 0, batch 1 used:0
cpu 3 cold: high 0, batch 1 used:0
DMA32 per-cpu: empty
Normal per-cpu:
cpu 0 hot: high 186, batch 31 used:30
cpu 0 cold: high 62, batch 15 used:55
cpu 1 hot: high 186, batch 31 used:7
cpu 1 cold: high 62, batch 15 used:57
cpu 2 hot: high 186, batch 31 used:30
cpu 2 cold: high 62, batch 15 used:60
cpu 3 hot: high 186, batch 31 used:55
cpu 3 cold: high 62, batch 15 used:58
HighMem per-cpu:
cpu 0 hot: high 186, batch 31 used:6
cpu 0 cold: high 62, batch 15 used:1
cpu 1 hot: high 186, batch 31 used:156
cpu 1 cold: high 62, batch 15 used:10
cpu 2 hot: high 186, batch 31 used:133
cpu 2 cold: high 62, batch 15 used:14
cpu 3 hot: high 186, batch 31 used:162
cpu 3 cold: high 62, batch 15 used:12
Free pages: 294344kB (287112kB HighMem)
Active:1071279 inactive:198851 dirty:39 writeback:0 unstable:0 free:73586 slab:13617 mapped:875260 pagetables:193027
DMA free:3588kB min:68kB low:84kB high:100kB active:0kB inactive:0kB present:16384kB pages_scanned:47 all_unreclaimable? yes
lowmem_reserve[]: 0 0 880 6640
DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 880 6640
Normal free:3644kB min:3756kB low:4692kB high:5632kB active:8576kB inactive:8336kB present:901120kB pages_scanned:17638 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 46080
HighMem free:287608kB min:512kB low:6664kB high:12816kB active:4275976kB inactive:787068kB present:5898240kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA: 1*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 3588kB
DMA32: empty
Normal: 1*4kB 1*8kB 3*16kB 2*32kB 13*64kB 1*128kB 2*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 3644kB
HighMem: 56965*4kB 6236*8kB 210*16kB 11*32kB 16*64kB 7*128kB 5*256kB 2*512kB 2*1024kB 0*2048kB 0*4096kB = 287732kB
Swap cache: add 26123, delete 25897, find 8523/11456, race 0+3
Free swap = 21932688kB
Total swap = 21967532kB
Free swap: 21932688kB
oom-killer: gfp_mask=0xd0, order=0
[<c014480a>] out_of_memory+0xa6/0xd8
[<c01459ee>] __alloc_pages+0x2e3/0x308
[<c0145a34>] __get_free_pages+0x21/0x41
[<c0198cd3>] proc_file_read+0x74/0x2bd
[<c016baee>] sys_fstat64+0x31/0x36
[<c01614ef>] vfs_read+0x1c7/0x1cc
[<c0161841>] sys_read+0x51/0x80
[<c0102d09>] syscall_call+0x7/0xb
Mem-info:
DMA per-cpu:
cpu 0 hot: high 0, batch 1 used:0
cpu 0 cold: high 0, batch 1 used:0
cpu 1 hot: high 0, batch 1 used:0
cpu 1 cold: high 0, batch 1 used:0
cpu 2 hot: high 0, batch 1 used:0
cpu 2 cold: high 0, batch 1 used:0
cpu 3 hot: high 0, batch 1 used:0
cpu 3 cold: high 0, batch 1 used:0
DMA32 per-cpu: empty
Normal per-cpu:
cpu 0 hot: high 186, batch 31 used:30
cpu 0 cold: high 62, batch 15 used:55
cpu 1 hot: high 186, batch 31 used:8
cpu 1 cold: high 62, batch 15 used:57
cpu 2 hot: high 186, batch 31 used:30
cpu 2 cold: high 62, batch 15 used:60
cpu 3 hot: high 186, batch 31 used:55
cpu 3 cold: high 62, batch 15 used:58
HighMem per-cpu:
cpu 0 hot: high 186, batch 31 used:6
cpu 0 cold: high 62, batch 15 used:1
cpu 1 hot: high 186, batch 31 used:156
cpu 1 cold: high 62, batch 15 used:10
cpu 2 hot: high 186, batch 31 used:133
cpu 2 cold: high 62, batch 15 used:14
cpu 3 hot: high 186, batch 31 used:169
cpu 3 cold: high 62, batch 15 used:12
Free pages: 685436kB (678204kB HighMem)
Active:973496 inactive:198851 dirty:39 writeback:0 unstable:0 free:171360 slab:13617 mapped:777462 pagetables:193027
DMA free:3588kB min:68kB low:84kB high:100kB active:0kB inactive:0kB present:16384kB pages_scanned:47 all_unreclaimable? yes
lowmem_reserve[]: 0 0 880 6640
DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 880 6640
Normal free:3644kB min:3756kB low:4692kB high:5632kB active:8576kB inactive:8336kB present:901120kB pages_scanned:17638 all_unreclaimable? yes
lowmem_reserve[]: 0 0 0 46080
HighMem free:678636kB min:512kB low:6664kB high:12816kB active:3884904kB inactive:787068kB present:5898240kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA: 1*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 3588kB
DMA32: empty
Normal: 1*4kB 1*8kB 3*16kB 2*32kB 13*64kB 1*128kB 2*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 3644kB
HighMem: 112623*4kB 24088*8kB 1753*16kB 31*32kB 19*64kB 8*128kB 5*256kB 2*512kB 2*1024kB 0*2048kB 0*4096kB = 678828kB
Swap cache: add 26123, delete 25897, find 8523/11456, race 0+3
Free swap = 21932688kB
Total swap = 21967532kB
Free swap: 21932688kB
1703936 pages of RAM
1474560 pages of HIGHMEM
145591 reserved pages
17583955 pages shared
216 pages swap cached
49 pages dirty
0 pages writeback
712430 pages mapped
13617 pages slab
191855 pages pagetables
Out of Memory: Kill process 12758 (oracle) score 554946 and children.
Out of memory: Killed process 12758 (oracle).
1703936 pages of RAM
1474560 pages of HIGHMEM
145591 reserved pages
17562485 pages shared
209 pages swap cached
0 pages dirty
1 pages writeback
711375 pages mapped
13617 pages slab
190231 pages pagetables
Out of Memory: Kill process 12703 (oracle) score 550851 and children.
Out of memory: Killed process 12703 (oracle).
1703936 pages of RAM
1474560 pages of HIGHMEM
145591 reserved pages
17482265 pages shared
204 pages swap cached
0 pages dirty
1 pages writeback
709528 pages mapped
13617 pages slab
188512 pages pagetables
Out of Memory: Kill process 30861 (oracle) score 547525 and children.
Out of memory: Killed process 30861 (oracle).
Thanks for any help.
*Edit: If the advice is to go back to a fresh 10.2 with 2.4.31, I'd even try that.
Last edited by Slim Backwater; 07-24-2006 at 09:00 PM.
07-25-2006, 03:42 AM | #2
Member
Registered: Jul 2004
Distribution: Void Linux, former Slackware
Posts: 498
First, I would track down the "leaking" process(es) consuming most of the memory (ps or top will do) and try to figure out what's wrong with them.
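Something along these lines will list the biggest memory consumers (GNU procps ps options; just a sketch):

```shell
# Show the ten processes with the largest resident set size.
# RSS and VSZ are reported in KiB by procps ps.
ps -eo pid,rss,vsz,comm --sort=-rss | head -n 11
```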
Second, on 2.6 kernels you can change the OOM-killer's behaviour, and it can effectively be disabled. I've added the following command to rc.local on one of the reference servers:
Code:
# Disable virtual memory overcommit (strict accounting), which
# prevents most OOM-killer invocations
/sbin/sysctl -w vm.overcommit_memory=2
See Documentation/sysctl/vm.txt in the kernel source for more.
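Before committing to that change, you can check the current policy and how close the box already is to the strict limit (standard /proc paths on 2.6; consider this a sketch):

```shell
# Current overcommit policy:
#   0 = heuristic overcommit (the default)
#   1 = always overcommit, never refuse an allocation
#   2 = strict accounting: commit limit = swap + overcommit_ratio% of RAM
cat /proc/sys/vm/overcommit_memory
# How much is committed now vs. the limit mode 2 would enforce:
grep -E '^(CommitLimit|Committed_AS):' /proc/meminfo
```

Be aware that with mode 2 and the default overcommit_ratio of 50, a large Oracle SGA plus a JVM may start getting allocation failures instead of OOM kills, so compare Committed_AS against CommitLimit first.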
Last edited by dunric; 07-25-2006 at 03:44 AM.
07-25-2006, 04:00 AM | #3
LQ Veteran
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,286
Quote:
Originally Posted by Slim Backwater
... and about every 4-5 days the OOM-Killer runs and usually kills the Java process.
Can't be all bad then ...
(sorry, couldn't resist)
Quote:
I have 24 G of swap, but I've only ever seen about 32 meg of that get used.
Odd, very odd.
Possibly a config issue with Oracle - hopefully someone else can advise there.
Personally I wouldn't be dicking around with OOM options - that is a symptom, not the problem itself. But I guess at least now you have the choice: a (possibly badly chosen) task killed, or the system slowing to a crawl so nothing gets done.