Memory Leaks in CentOS 6.2
Hello all! I was hoping to get some assistance with a memory issue I've begun to notice on a new CentOS 6.2 install.
The quickie - At a fresh idle shortly after booting, my memory usage is around 400-500 MB including Apache, MySQL, etc. After a few hours or so, 2.5 GB of RAM is in use, and I cannot tell by what. Given enough time, it eats up all 8 GB of RAM.
The slow -
System specs - 2 x 160 GB SATA2, Athlon X2, 8 GB of RAM. CentOS 6.2 - PHP 5.3.10 - MySQL 5.5
Linux hosting.#######.com 2.6.32-220.17.1.el6.x86_64 #1 SMP Wed May 16 00:01:37 BST 2012 x86_64 x86_64 x86_64 GNU/Linux
Upon a clean reboot, the memory load looks good.
top - 17:14:21 up 18 min, 1 user, load average: 0.01, 0.00, 0.00
Tasks: 117 total, 1 running, 116 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.7%us, 0.6%sy, 0.0%ni, 97.6%id, 1.1%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 7801724k total, 328804k used, 7472920k free, 20324k buffers
Swap: 10043384k total, 0k used, 10043384k free, 115112k cached
After the server has been running for a while, the memory usage is much higher than it should be.
top - 16:55:19 up 7:20, 1 user, load average: 0.00, 0.00, 0.00
Tasks: 134 total, 1 running, 133 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.2%us, 0.2%sy, 0.0%ni, 99.2%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 7801724k total, 2587900k used, 5213824k free, 128964k buffers
Swap: 10043384k total, 0k used, 10043384k free, 1933744k cached
One thing I must mention is that while I have the server up and running, there is not yet any web traffic to the sites, as I have not yet finished everything. The only traffic I see is the usual attack attempts and the Google bots.
I've been reading around, searching for CentOS 6.2 memory leaks and the like. Others have seen similar behavior, but everyone else has been able to tie the problem to something specific; I cannot seem to find a process that uses that much memory. Also, it's weird that it just builds up over time. It's not an instant jump; it takes hours to consume all 8 GB.
Any insight at all would be helpful :)
It's just page cache (files read, filesystem internals like inodes cached, and so on), nothing to worry about.
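As a generic sanity check (not something from this thread), you can confirm that no single process accounts for the "used" figure by sorting processes by resident memory:

```shell
# Show the ten biggest resident-memory (RSS) consumers.
# If none of them comes close to the "used" figure in top,
# the memory is sitting in the kernel's caches, not in a leaking process.
ps aux --sort=-rss | head -n 10
```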
I'm not sure why top counts it as used, though; it behaves the same on my Debian-derived Xubuntu, too. I normally look at /proc/meminfo myself, so I haven't paid a lot of attention to which memory statistics top reports. (On my workstation, top reported 3 GB used out of 6 GB RAM total, but 2.3 GB of that was actually page cache, inodes, and dentries; only 700 MB was really used by userspace.)
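As a rough sketch of that calculation (field names taken straight from /proc/meminfo), you can subtract the reclaimable caches from the "used" figure to estimate what userspace is really consuming:

```shell
# MemTotal - MemFree - Buffers - Cached is approximately the memory that
# applications (plus the kernel's non-reclaimable allocations) actually hold.
awk '/^MemTotal:/ {total=$2}
     /^MemFree:/  {free=$2}
     /^Buffers:/  {buffers=$2}
     /^Cached:/   {cached=$2}
     END {printf "really used: %d MB\n", (total-free-buffers-cached)/1024}' /proc/meminfo
```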
If you want to see the actual memory used by userspace applications and libraries, clear caches first:
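Presumably something like the following, the standard sync-then-drop_caches sequence, run as root:

```shell
# Flush dirty pages to disk first so no unwritten data is lost...
sync
# ...then drop the page cache, dentries, and inodes (3 = all of them).
echo 3 > /proc/sys/vm/drop_caches
```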
I only recommend running it for diagnostic purposes, since it will obviously cause a temporary slowdown: immediately, because all unwritten data is written to disk in one go, and afterwards, because all new data must be read from disk, as nothing useful is cached in RAM. But it never causes any damage, and the slowdown is only temporary.
Have a read of linuxatemyram for a little more background.
Nominal, syg00; thanks to you both for the replies :) Very useful information.
I was thinking about what you said, Nominal, regarding the mem/disk writes, and it prompted me to check the 'messages' log. I noticed it was full of the following entry:
May 27 07:19:57 hosting kernel: [drm:atom_get_src_int] *ERROR* ATOM: fb read beyond scratch region: 1245188 vs. 16384
As far as I can tell, it's a failing HDD. I will research it more, but it almost seems as if the two are related. If anyone has experienced the same thing during a drive's death, I'd love to hear your story.
I left the server running overnight, and top showed that usage was looking much better.
top - 07:16:50 up 14:20, 1 user, load average: 0.02, 0.01, 0.00
Tasks: 117 total, 1 running, 116 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.2%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 7801724k total, 760920k used, 7040804k free, 151460k buffers
Swap: 10043384k total, 0k used, 10043384k free, 339320k cached
Also, clearing the caches worked great; I noticed the usage was now ~200 MB at idle =D
Thanks again for the assist!
Regarding dropping the caches: it's a bad idea. There is a reason Linux caches everything, and that reason is speed. If you had a memory leak, you would see something like an "OutOfMemory" error; otherwise, this is just normal behavior of the Linux kernel and should be left alone.
Caching in memory is supposed to improve speed by avoiding unnecessary I/O to the disk. If you drop the caches, the system has to re-read everything from disk, which is slow, super slow compared to RAM.
Thanks for your reply Robert!
The thing that worries me most about the error I posted
"May 27 07:19:57 hosting kernel: [drm:atom_get_src_int] *ERROR* ATOM: fb read beyond scratch region: 1245188 vs. 16384 "
is that it didn't start until a couple of days ago. I assumed it was HDD-related, mostly because I had just mounted another drive in the system, and I know it to be a bit aged.
I don't mind dumping the cache at the moment, as I'm doing a lot of testing on this box, but it's good to know exactly what it is I'm dumping.
What I find even more peculiar: I have a nearly identical box next to it that does receive traffic, and it does not have this issue. It has different hardware, but the same CentOS 6.2 install with the same basics: Apache, PHP, MySQL.
It was worst in the beginning, and it seems to have settled down on its own? When I ran top after logging in a couple of days ago, I noticed nearly all of the 8 GB of RAM was considered "used". While it seems fine now, I'm still trying to find the source of the issue.
So you have 2 issues.
2. Memory: as per syg00's link to linuxatemyram, Linux will always attempt to use all your RAM; otherwise you've wasted money on unused RAM :)
The kernel virtual memory system is smart and will release (re-assign) memory on demand as required.
You only need to worry if you see a big performance hit (unlikely) or swap starts filling up ...
Whenever you use your system, the kernel uses free memory to cache the files and filesystem details you access. You can see this in practice if you run e.g. find /usr -type f >/dev/null, listing all files under /usr but discarding the output. The first time you run it, it takes quite a while; run it again, and the second run will be very fast.
This is a good thing. It means the kernel is just anticipating your future needs, without making any commitments. The memory is still completely free. The kernel just saves stuff it thinks might come handy later on in those free pages, but it is not reserving them or putting them away in any way. If an application requests memory, that RAM will be given to the application instead, always. No memory allocation will ever fail because of that caching. Caches are your friend: do not drop the caches just because you feel uneasy about them!
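The find experiment above can be timed directly; a quick sketch (exact timings depend on your disk and on how much is already cached):

```shell
# First run: cold cache, every directory entry is read from disk (slow).
time find /usr -type f >/dev/null
# Second run: the dentries and inodes are now cached in RAM, so it is much faster.
time find /usr -type f >/dev/null
```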
@chrism01 - Thanks for the link! I've read about that message in forums before, but was never able to completely interpret it. Now that I understand where the issue is coming from, I can take the proper steps to patch or make other changes to keep the junk out of the log :)
And thanks to you both again for the memory input. I think it's sinking in now :p I guess what was throwing me off was the fact that I came from limited VPS environments. In my head I was thinking, "If it eats at least 1 GB or more at idle, the same distro couldn't run on a VPS with 768 MB of RAM." It was a case of broken logic from not completely understanding Linux memory usage.
I feel as if I have accomplished something by being better informed, and I can't thank you guys enough :)
Good to hear.
You're certainly not the first nor will be the last to misunderstand caching; that's why they wrote this http://www.linuxatemyram.com/ :)