I have some kind of memory-related problem that invokes the oom-killer. I hope that some kernel or memory management guru has some ideas about what could be the cause and what I can do to solve the problem!
The box is running Debian/Lenny 32-bit, which is now also
Debian/Stable. The CPU is "CPU0: Intel(R) Core(TM)2 Quad CPU Q9550 @
2.83GHz stepping 07" and the box has 8 GB of memory. The kernel is a
Debian standard "Linux big 2.6.26-1-686-bigmem #1 SMP Sat Jan 10
19:13:22 UTC 2009 i686 GNU/Linux". All software is up to date.
I have more or less daily problems with memory management, resulting in the oom-killer kicking in and killing some processes.
The box is very lightly loaded and the only memory-demanding application is VirtualBox. As I understand it there are always huge amounts of free memory, and I can't understand why the kernel doesn't agree with me...
When the oom-killer does its job I get really extensive information in /var/log/messages. I hope that someone more capable than me can help me analyze the log, and maybe help me understand what's going on and what I can do to resolve the problem.
So far I have tried some memory management tuning by adding a few rows to /etc/sysctl.conf.
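Tunables of the kind I mean (the values here are illustrative examples only, not my exact settings):
Code:
# examples of lowmem-related sysctls on a 32-bit highmem kernel
# (illustrative values - not the actual rows from my sysctl.conf)
vm.min_free_kbytes = 16384            # raise the zone watermarks so reclaim starts earlier
vm.vfs_cache_pressure = 150           # reclaim dentry/inode slab more aggressively
vm.lowmem_reserve_ratio = 256 256 32  # how strongly lower zones are protected from higher-zone allocations
Below is what /var/log/messages captured around one of the kills: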
Code:
Mar 20 00:18:56 big -- MARK --
Mar 20 00:38:56 big -- MARK --
Mar 20 00:58:56 big -- MARK --
Mar 20 01:17:34 big kernel: [1591579.037846] gkrellm invoked oom-killer: gfp_mask=0x800d0, order=0, oomkilladj=0
Mar 20 01:17:34 big kernel: [1591579.037852] Pid: 4514, comm: gkrellm Tainted: P 2.6.26-1-686-bigmem #1
Mar 20 01:17:34 big kernel: [1591579.037871] [<c015fe1e>] oom_kill_process+0x4f/0x195
Mar 20 01:17:34 big kernel: [1591579.037887] [<c0160248>] out_of_memory+0x14e/0x17f
Mar 20 01:17:34 big kernel: [1591579.037900] [<c01621aa>] __alloc_pages_internal+0x2b8/0x34e
Mar 20 01:17:34 big kernel: [1591579.037911] [<c01aeb39>] proc_file_read+0x0/0x1ff
Mar 20 01:17:34 big kernel: [1591579.037916] [<c016224c>] __alloc_pages+0x7/0x9
Mar 20 01:17:34 big kernel: [1591579.037921] [<c016225d>] __get_free_pages+0xf/0x1b
Mar 20 01:17:34 big kernel: [1591579.037925] [<c01aebad>] proc_file_read+0x74/0x1ff
Mar 20 01:17:34 big kernel: [1591579.037935] [<c01aeb39>] proc_file_read+0x0/0x1ff
Mar 20 01:17:34 big kernel: [1591579.037940] [<c01ab436>] proc_reg_read+0x58/0x6b
Mar 20 01:17:34 big kernel: [1591579.037948] [<c01ab3de>] proc_reg_read+0x0/0x6b
Mar 20 01:17:35 big kernel: [1591579.037952] [<c017e88e>] vfs_read+0x81/0x11e
Mar 20 01:17:35 big kernel: [1591579.037960] [<c017ecdf>] sys_read+0x3c/0x63
Mar 20 01:17:35 big kernel: [1591579.037968] [<c0108853>] sysenter_past_esp+0x78/0xb1
Mar 20 01:17:35 big kernel: [1591579.037984] =======================
Mar 20 01:17:35 big kernel: [1591579.037986] Mem-info:
Mar 20 01:17:35 big kernel: [1591579.037987] DMA per-cpu:
Mar 20 01:17:35 big kernel: [1591579.037989] CPU 0: hi: 0, btch: 1 usd: 0
Mar 20 01:17:35 big kernel: [1591579.037991] CPU 1: hi: 0, btch: 1 usd: 0
Mar 20 01:17:35 big kernel: [1591579.037993] CPU 2: hi: 0, btch: 1 usd: 0
Mar 20 01:17:35 big kernel: [1591579.037994] CPU 3: hi: 0, btch: 1 usd: 0
Mar 20 01:17:35 big kernel: [1591579.037996] Normal per-cpu:
Mar 20 01:17:35 big kernel: [1591579.037998] CPU 0: hi: 186, btch: 31 usd: 151
Mar 20 01:17:35 big kernel: [1591579.038000] CPU 1: hi: 186, btch: 31 usd: 117
Mar 20 01:17:35 big kernel: [1591579.038002] CPU 2: hi: 186, btch: 31 usd: 166
Mar 20 01:17:35 big kernel: [1591579.038003] CPU 3: hi: 186, btch: 31 usd: 172
Mar 20 01:17:35 big kernel: [1591579.038005] HighMem per-cpu:
Mar 20 01:17:35 big kernel: [1591579.038007] CPU 0: hi: 186, btch: 31 usd: 49
Mar 20 01:17:35 big kernel: [1591579.038009] CPU 1: hi: 186, btch: 31 usd: 140
Mar 20 01:17:35 big kernel: [1591579.038010] CPU 2: hi: 186, btch: 31 usd: 26
Mar 20 01:17:35 big kernel: [1591579.038012] CPU 3: hi: 186, btch: 31 usd: 163
Mar 20 01:17:35 big kernel: [1591579.038015] Active:585080 inactive:487333 dirty:17 writeback:0 unstable:0
Mar 20 01:17:35 big kernel: [1591579.038017] free:794249 slab:186448 mapped:32269 pagetables:2309 bounce:0
Mar 20 01:17:35 big kernel: [1591579.038020] DMA free:7592kB min:584kB low:728kB high:876kB active:0kB inactive:0kB present:16256kB pages_scanned:0 all_unreclaimable? no
Mar 20 01:17:35 big kernel: [1591579.038023] lowmem_reserve[]: 0 1746 17748 17748
Mar 20 01:17:35 big kernel: [1591579.038028] Normal free:28260kB min:32180kB low:40224kB high:48268kB active:14124kB inactive:14044kB present:894080kB pages_scanned:53611 all_unreclaimable? no
Mar 20 01:17:35 big kernel: [1591579.038031] lowmem_reserve[]: 0 0 64008 64008
Mar 20 01:17:35 big kernel: [1591579.038035] HighMem free:3141144kB min:512kB low:74240kB high:147968kB active:2326196kB inactive:1935288kB present:8193024kB pages_scanned:0 all_unreclaimable? no
Mar 20 01:17:35 big kernel: [1591579.038038] lowmem_reserve[]: 0 0 0 0
Mar 20 01:17:35 big kernel: [1591579.038042] DMA: 84*4kB 52*8kB 31*16kB 18*32kB 16*64kB 9*128kB 2*256kB 2*512kB 0*1024kB 1*2048kB 0*4096kB = 7584kB
Mar 20 01:17:35 big kernel: [1591579.038050] Normal: 1695*4kB 3*8kB 2*16kB 0*32kB 1*64kB 36*128kB 17*256kB 6*512kB 3*1024kB 1*2048kB 1*4096kB = 28148kB
Mar 20 01:17:35 big kernel: [1591579.038058] HighMem: 41687*4kB 10581*8kB 881*16kB 17144*32kB 13508*64kB 6787*128kB 1470*256kB 197*512kB 12*1024kB 9*2048kB 21*4096kB = 3141268kB
Mar 20 01:17:35 big kernel: [1591579.038067] 633828 total pagecache pages
Mar 20 01:17:35 big kernel: [1591579.038069] Swap cache: add 37, delete 35, find 0/1
Mar 20 01:17:35 big kernel: [1591579.038071] Free swap = 8008256kB
Mar 20 01:17:35 big kernel: [1591579.038073] Total swap = 8008392kB
Mar 20 01:17:35 big kernel: [1591579.081640] 2293760 pages of RAM
Mar 20 01:17:35 big kernel: [1591579.081640] 2064384 pages of HIGHMEM
Mar 20 01:17:35 big kernel: [1591579.081640] 228611 reserved pages
Mar 20 01:17:35 big kernel: [1591579.081640] 509557 pages shared
Mar 20 01:17:35 big kernel: [1591579.081640] 2 pages swap cached
Mar 20 01:17:35 big kernel: [1591579.081640] 17 pages dirty
Mar 20 01:17:35 big kernel: [1591579.081640] 0 pages writeback
Mar 20 01:17:35 big kernel: [1591579.081640] 32269 pages mapped
Mar 20 01:17:35 big kernel: [1591579.081640] 186473 pages slab
Mar 20 01:17:35 big kernel: [1591579.081640] 2309 pages pagetables
How much memory have you allocated to guests in virtual box (during these OOM kills)? What are you looking at that makes you think there are huge amounts of free mem?
Hi, thanks for your interest in trying to help me!
I have 1 GB allocated to the VM. The box has 8 GB of memory and around 1 GB is used by other processes. I should have around 6 GB to play with. I also normally have no problem starting a couple of other VMs. But sometimes the oom-killer kicks in.
The info below is taken when one VM with 1 GB is running. No memory tuning has been added to /etc/sysctl.conf; only default values are applied.
I don't immediately have an answer. From some quick poking around yesterday, I saw that this has happened because of a bug in a driver for one user's hardware. So when I have a few minutes, I want to keep looking and hopefully have something helpful to add. There are some other threads about this here, too (in "similar threads").
Looks like a classic "lowmem" exhaustion - simplest solution is to run a 64-bit system. Linux (32-bit) wasn't designed to handle large memory systems - the 64-bit linear addressing works much better.
On 32-bit, having lots of free memory above 1 Gig is no help if the applications insist on allocating below. You are attacking the symptom, not the problem.
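On a highmem kernel you can also watch the low zone directly in /proc/meminfo - LowTotal on 32-bit tops out under a Gig no matter how much RAM is fitted. A quick check (assuming the standard highmem field names):
Code:
$ grep -E '^(LowTotal|LowFree|HighTotal|HighFree)' /proc/meminfo
In your Mem-info dump, "Normal" is the low zone: free:28260kB against a min watermark of 32180kB - that is why the killer fires with gigabytes still free in HighMem.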
Seems like 64-bit is the right solution, but for various reasons I'd rather avoid that at the moment.
Does anyone have any idea whether "CONFIG_HIGHPTE=y" (kernel config parameter) might help? I found some documentation saying "The VM uses one page of memory for each page table. For systems with a lot of RAM, this can be wasteful of precious low memory. Setting this option will put user-space page tables in high memory."
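For what it's worth, whether the running kernel already sets it can be checked against the Debian-shipped config (assuming the standard /boot location):
Code:
$ grep CONFIG_HIGHPTE /boot/config-$(uname -r)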
Not worth wasting your time trying to reclaim less than 7 Meg.
Now I understand what you mean, and no, 7 MB isn't much to save. The only reason I mentioned CONFIG_HIGHPTE is that I found that recommendation in a thread where someone had problems similar to mine.
Anyway, I think my problems are in some way related to VirtualBox, since the oom-killer situation started after I upgraded VirtualBox from 2.0.6 to 2.1.4. Before starting this thread I started one at the VirtualBox forum, http://forums.virtualbox.org/viewtop...fce8320cc8e1e5, but so far no response. When I ran version 2.0.6 or lower I never had any oom-killer problems.
Even though I understand that the preferred solution is to go to a 64-bit OS, it feels strange if that is the only solution. I mean, people do run 32-bit Linux successfully with rather large amounts of memory, don't they?
I should have had a look at that full log earlier - have you always run that many instances?
I found an old 32-bit server I had lying around unused in the office - 4 Gig RAM. Installed VBox 2.1.4 on CentOS 5.2 and fired up a couple of guests.
Each instance contributes better than 100 Meg of dirty private (non-shared) storage below the 1 Gig boundary. One (non-X) image that was actually running chewed up 182 Meg (the other was stopped partway through an install). Multiply that by a few times, and it's easy to see storage becoming short.
First of all, I really appreciate that you are trying to help me. Really!
Quote:
Originally Posted by syg00
Given that you are a Debian user, does that imply you are using OSE?
No, it's the one under the "VirtualBox Personal Use and Evaluation License (PUEL)", installed by aptitude after adding "deb http://download.virtualbox.org/virtualbox/debian lenny non-free" to /etc/apt/sources.list.
Quote:
Originally Posted by syg00
I should have had a look at that full log earlier - have you always run that many instances?
I found an old 32-bit server I had lying around unused in the office - 4 Gig RAM. Installed VBox 2.1.4 on CentOS 5.2 and fired up a couple of guests.
Each instance contributes better than 100 Meg of dirty private (non-shared) storage below the 1 Gig boundary. One (non-X) image that was actually running chewed up 182 Meg (the other was stopped partway through an install). Multiply that by a few times, and it's easy to see storage becoming short.
Normally, when I have my oom-killer problems, only one instance is running. I have been running, if I remember correctly, as many as four instances, each with around 1 to 1.5 GB allocated to the guest. That has worked without any oom-killings, go figure...
Usually the oom-killer kicks in in the middle of the night with only one VM running. I upgraded VirtualBox to 2.1.4 on March 7. Before that I never had any problems. After:
Code:
$ sudo zgrep 'Killed process' /var/log/messages*
/var/log/messages:Mar 23 01:19:32 big kernel: [215148.388898] Killed process 13341 (VirtualBox)
/var/log/messages:Mar 23 01:19:33 big kernel: [215148.416920] Killed process 13412 (firefox-bin)
/var/log/messages:Mar 23 01:19:35 big kernel: [215148.441370] Killed process 8874 (apache2)
/var/log/messages.1.gz:Mar 19 01:18:03 big kernel: [1447935.391583] Killed process 9992 (VirtualBox)
/var/log/messages.1.gz:Mar 19 01:18:04 big kernel: [1447935.431522] Killed process 13205 (apache2)
/var/log/messages.1.gz:Mar 19 22:16:09 big kernel: [1574246.270236] Killed process 15779 (firefox-bin)
/var/log/messages.1.gz:Mar 19 22:21:13 big kernel: [1574697.342575] Killed process 15665 (VirtualBox)
/var/log/messages.1.gz:Mar 20 01:17:40 big kernel: [1591579.081641] Killed process 16939 (VirtualBox)
/var/log/messages.1.gz:Mar 20 01:17:45 big kernel: [1591579.122579] Killed process 15169 (firefox-bin)
/var/log/messages.1.gz:Mar 20 13:26:30 big kernel: [1667397.282821] Killed process 29030 (VirtualBox)
/var/log/messages.1.gz:Mar 21 01:17:31 big kernel: [1734166.511986] Killed process 14507 (VirtualBox)
/var/log/messages.2.gz:Mar 9 01:17:27 big kernel: [193049.583556] Killed process 16917 (VirtualBox)
/var/log/messages.2.gz:Mar 9 01:17:29 big kernel: [193049.641312] Killed process 4253 (firefox-bin)
/var/log/messages.3.gz:Mar 7 17:39:51 big kernel: [ 8357.910234] Killed process 5374 (VirtualBox)
/var/log/messages.3.gz:Mar 7 17:39:52 big kernel: [ 8357.932204] Killed process 6691 (winedevice.exe)
As you can see, around 01:17 at night is a popular time (I guess many serial killers like that time of night). This might be related to a cron job that starts at 01:15 and basically does "/bin/ls -alR /". This machine has just over ten local file systems and six remote ones mounted over NFS (version 3). One theory is that this "big ls", which takes a couple of minutes, demands memory in a way that doesn't mix well with VirtualBox 2.1.4.
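To test that theory I plan to log low memory around the cron window, something like this in /etc/crontab (a rough sketch - the log path is just an example):
Code:
# log LowFree once a minute during the 01:00 hour (sketch)
* 1 * * * root (date; grep LowFree /proc/meminfo) >> /var/log/lowfree.log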
Quote:
Originally Posted by Ralfredo
One theory is that this "big ls", which takes a couple of minutes, demands memory in a way that doesn't mix well with VirtualBox 2.1.4.
It will certainly fill up the filesystem cache, but that would be freed up immediately as needed by other new memory demands. Not to go off on a complete tangent, but what else is in that cron job? Is there any description of what its purpose in life is?
It just creates a kind of very simple "system history file".
It helps me answer questions like when I installed XXX or when I was stupid enough to remove YYY. The precision gets worse as time goes on, but I often find myself helped by having the history. Nothing I can't live without, but helpful from time to time. Besides burning quite a few CPU cycles and massaging the disks a little, the script ought to be harmless.
BTW, I know slightly smarter programming could avoid a couple of "cp"s, but hey, the machine needs to feel useful even in the middle of the night
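In essence the job is little more than this (a simplified sketch - the real paths and file names differ):
Code:
#!/bin/sh
# nightly "system history" snapshot - simplified sketch, hypothetical paths
DST=/var/local/history
cp "$DST/ls-alR.txt" "$DST/ls-alR.prev" 2>/dev/null   # keep the previous run
/bin/ls -alR / > "$DST/ls-alR.txt" 2>/dev/null        # the "big ls" that runs at 01:15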
The files created are simply those recursive listings, if that's not obvious.
While we're talking about cron jobs: the only other non-Debian-standard cron job running around 01:17 takes only a few seconds (if even that), and I can't see that it should have anything to do with my problem, but who knows. Stranger things have happened...