Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
Any ideas? This is making me a bit nervous as I just went live with this server yesterday. It ran for months in testing (but with only me using the system) and now it's been running for about 30 hours straight.
This is to do with the way Linux memory management works, I believe. Linux puts otherwise idle memory to work as disk cache, and those pages aren't actually reclaimed until another process needs them. So although it might show 15 GB in use, that doesn't mean all of it is actively being used, and other processes can still take it.
Thanks for confirming that. I started to suspect as much when I realized it was rather convenient that almost all memory was "used" and yet there was a nice little cushion, and the system was (apparently) functioning well.
Is there a way I can check how much is actually used by programs, so I can keep an eye on it?
The top row 'used' value (85) will almost always nearly match the top row 'Mem' total (90), since Linux likes to use any spare memory to cache disk blocks (34).
The key used figure to look at is the '-/+ buffers/cache' row's used value (46). This is how much memory your applications are actually using. For best performance, this number should be less than your total memory (90). To prevent out-of-memory errors, it needs to be less than the total memory (90) plus swap space (9).
If you wish to quickly see how much memory is free, look at the '-/+ buffers/cache' row's free value (43). This is the total memory (90) minus the actual used (46). (90 - 46 = 44, not 43; that's just a rounding issue.)
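The same arithmetic can be sketched directly against /proc/meminfo. This assumes an older kernel without the newer MemAvailable field, and it ignores reclaimable slab, so it slightly overstates real application usage:

```shell
# Approximate "used by applications" = MemTotal - MemFree - Buffers - Cached.
# All /proc/meminfo values are in kB, hence the /1024 to get MB.
awk '/^MemTotal:/ {total=$2}
     /^MemFree:/  {free=$2}
     /^Buffers:/  {buf=$2}
     /^Cached:/   {cached=$2}
     END {printf "apps: %d MB, effectively free: %d MB\n",
          (total-free-buf-cached)/1024, (free+buf+cached)/1024}' /proc/meminfo
```

The "effectively free" figure is what the '-/+ buffers/cache' row of free is showing you.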
The only thing I still don't get is, it seems 10 GB of 16 GB are being used. That's better than all of it, but what individual programs are using it? If I do 'ps aux' as root, I get the output below. If I add up the VSZ numbers, it comes out to about 4,300,000 (4.3 GB?). Where is the other 6GB being used (assuming the VSZ number reflects MB of memory used).
I don't see anything out of order there. To keep yourself sane, just keep an eye on the active/inactive ratio. A modest active figure shows you aren't heavily using the memory (at that time, anyway). I also like to check that slabs aren't eating too much; just see whether that number keeps increasing over time.
As for what's using it, there are a couple of ways of finding out. "top" can be sorted by %MEM; if you have sysstat installed, pidstat will show per-process memory (depending on kernel level). Otherwise there is /proc/<pid>/smaps; there's a Perl script somewhere that presents it in a sensible format, though it might only handle a single PID, IIRC.
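No Perl is strictly needed for the smaps route; a minimal sketch with awk (the Pss field requires kernel 2.6.25 or later, and /proc/self here is just for illustration; substitute the PID you're interested in):

```shell
# Sum the proportional set size (Pss) of one process's mappings.
# Pss divides shared pages among their users, so summing it across
# processes doesn't double-count shared libraries the way RSS does.
awk '/^Pss:/ {kb += $2} END {printf "Pss total: %d kB\n", kb}' /proc/self/smaps
```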
The active/inactive and slabs seem about right at the moment, but I've been running "free -m" all night and this is what I see (with some block of time between each major change):
Basically, the free memory is getting smaller and smaller. I'm tempted to reboot right now while most users are inactive (and pray the system comes back up automatically since I'd rather not go to the office at midnight) to see what it looks like at the start and monitor it over the next few days.
Reading /proc/meminfo right now shows that most things that could go down have gone down a little (expected at night time), but the "Dirty:" total has gone way up, from just half a MB to about half a GB.
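That particular counter is easy to keep an eye on by itself. A quick sketch:

```shell
# Dirty = pages modified in memory but not yet written to disk;
# Writeback = pages currently being written out. When these get flushed
# is governed by the vm.dirty_background_ratio / vm.dirty_ratio sysctls.
grep -E '^(Dirty|Writeback):' /proc/meminfo
```

A large Dirty figure that drains away on its own (or after a sync) is just deferred writeback, not a leak.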
At least I can sleep. I just wish I knew whether the slowly decreasing "unused" memory reported by free is an actual problem. And if it is, was it there all along, or was it caused by some new program I installed today? I added TeX (and many helper programs), an X Windows tool or two, several programming languages, and spamhaus filtering for sendmail. I also reduced the fork limit for spamassassin from 5 to 2 (on the recommendation of another site somewhere that suggested it runs fine with just 1, which limits potential resource usage spikes).
EDIT
Well, by the morning the stats look about the same... I'm starting to realize that this is probably normal. The inactive/active ratio is good and the slab is staying about the same.
I just wish I knew if the slowly decreasing "unused" memory reported by free is an actual problem
As always, "it depends". Most likely not. You'll find the free number will drop to a low threshold and stay there; cache will be freed as needed. Start worrying when swap gets used and keeps increasing (a small amount of usage is OK). That may well indicate a problem, and eventually your biggest consumer will get smacked by the OOM killer. And it's nearly always the one thing you don't want killed.
But there's nothing to indicate a problem in the numbers we've seen so far.
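To watch for exactly that pattern, a small sketch that reports swap consumption from /proc/meminfo; it's the steady climb that matters, not any single reading:

```shell
# Swap used = SwapTotal - SwapFree (both reported in kB).
awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2}
     END {printf "swap used: %d MB of %d MB\n", (t-f)/1024, t/1024}' /proc/meminfo
```

Run it from cron every few hours and compare the numbers over a few days.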
Thanks for the reassurances. After monitoring the system further and, incidentally, installing the same Debian OS on another system, I'm seeing that these numbers are not a problem.
The second system I mentioned has similar hardware, but weaker and fewer CPUs, and most importantly only 1 GB of memory instead of 16. There are no users or serious web serving going on, since I only want it for rsyncing backups (though I was thinking of using it as an emergency server in case the main one goes down). Anyway, this thing is using a tiny bit of swap space (just 50 MB), has very little free memory, and shows noticeable unresponsiveness while files are rsyncing (which the first server didn't experience when I transferred from the old BSD server I was phasing out).
In any case... I think I just need to buy some memory for my backup server and relax a bit.
Unlike every other similar thread I've seen, this time I think something is seriously wrong.
Quote:
Am I really using 15.75 GB of physical memory?
Not quite. Some significant fraction of the 4954 MB reported as cached and the 679 MB reported as buffers is effectively free.
But that still leaves most of your memory not accounted for.
Quote:
Top isn't suggesting that anything is eating all the memory either:
Did you try telling it to sort so the largest values in the RES column are on top?
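Non-interactively, the same ordering can be had from ps (the --sort option assumes a procps-style ps, as shipped by Debian):

```shell
# Ten biggest resident-memory consumers, largest RSS first
# (one extra line for the column headers).
ps aux --sort=-rss | head -n 11
```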
Quote:
Originally Posted by davidstvz
If I do 'ps aux' as root, I get the output below. If I add up the VSZ numbers, it comes out to about 4,300,000 (4.3 GB?). Where is the other 6GB being used (assuming the VSZ number reflects MB of memory used).
Actually the amount used by processes is less than or equal to the total of the RSS column, which is much less than the total of VSZ.
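A sketch of that comparison (ps reports both columns in kB, hence the /1024; the RSS total is itself an upper bound, since it double-counts shared pages):

```shell
# Compare total VSZ (virtual size) against total RSS (resident set size)
# across all processes.
ps -eo vsz=,rss= | awk '{v += $1; r += $2}
    END {printf "VSZ total: %d MB, RSS total: %d MB\n", v/1024, r/1024}'
```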
So something really doesn't add up here.
Quote:
Originally Posted by syg00
I don't see anything out of order there.
Can you explain why?
I'm especially confused about that "Inactive" number. I don't understand how Active and Inactive relate to the other measures of memory use. When memory is not Mapped nor Anonymous nor Slab nor Buffers nor Cache, what significant uses are left?
Linux uses lazy allocation. It's too expensive to continually run the page queues when not needed, more so as the RAM size increases. If a page is no longer needed by a task, it doesn't necessarily go back on the free queue straight away. New requests come off the free queue, so the apparent usage is larger than the "real" usage.
Active/inactive is simply an LRU indicator (new/old, hot/cold, pick a terminology). If pages are needed to replenish the free list, they can be taken from (non-dirty) inactive pages as well as cache without the need for a write.
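Those per-list counts can be read straight out of /proc/meminfo (field names vary slightly by kernel version; newer kernels also split each list into anon and file variants):

```shell
# Active = recently-referenced pages; Inactive = reclaim candidates.
grep -E '^(Active|Inactive)' /proc/meminfo
```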
Well, the system has been running fine since Monday at noon. I rebooted on Tuesday night, but it was likely unnecessary. The memory usage was behaving about the same by the next morning.
This is what top reports when sorted by memory (interactive command: shift+M):