Memory depletion on server running Tomcat; can't find the cause
After a ton of research I believe I now understand the output of ps, top, and free better than ever, and have a better grasp of memory management (virtual address space, etc.) than I ever had before. That said, my server is critically low on available memory and I can't make 1+1=2 as to why. I suspect it's Tomcat/JVM (which I admittedly know precious little about). I am rebuilding this server (for a number of reasons) and plan to install 8GB, but solving this mystery is key to supporting/promoting my design plans.
Code:
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 3608656 102696 37488 85944 1 1 3 4 5 4 0 0 99 0
0 0 3608656 102688 37488 85968 0 0 0 0 861 1403 0 0 100 0
0 0 3608656 102588 37488 85968 0 0 0 0 883 1332 0 0 99 0
0 0 3608656 103348 37488 85968 0 0 0 0 766 1286 0 0 100 0
0 0 3608656 103432 37492 85968 0 0 0 72 940 1428 0 0 100 0
I also ran a command I found elsewhere on this forum (ps -eo size,pid,user,cmd --sort -size | head -10) and it came back with a lot of data (too much to post), but my understanding is that the SIZE value in ps is a rough figure that really only describes how much swap would be used if the entire process were paged out at once.
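For comparison, sorting by RSS instead of SIZE gives a closer picture of physical memory actually in use (a sketch using standard procps format specifiers; RSS still double-counts shared pages):

```shell
# Sort processes by resident set size (RSS, in KiB) -- the pages
# actually mapped into physical memory -- instead of SIZE.
ps -eo rss,pid,user,cmd --sort -rss | head -10
```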
Tomcat runs as a number of private instances, so there's a separate startup script for each project. The default values were kept for the JVM heap sizes: -Xms512M -Xmx1536M
If I understand correctly, this means that when each Tomcat instance starts, its heap begins at 512MB of its allocated address space and can grow to a ceiling of about 1.5GB total as the need arises. So from the start there are 512MB x however many instances I have installed sitting in virtual memory, but not mapped to physical memory. The mapping only begins when someone actually accesses one of the instances by using the associated webapp. Am I right so far?
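To put rough numbers on the worst case described above (a back-of-the-envelope sketch; the instance count here is a hypothetical stand-in):

```shell
# If every instance eventually fills its heap, N instances can commit
# N * 1536 MB of Java heap alone -- before counting native JVM overhead
# such as thread stacks, metaspace/permgen, and direct buffers.
instances=4   # hypothetical -- substitute your actual instance count
awk -v n="$instances" 'BEGIN { printf "%d MB max heap total\n", n * 1536 }'
```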
So, I have very little memory left, I am swapping pretty hard, and even though I suspect it's the Tomcat/JVM stuff, it sure doesn't look like it from the memory tools. For that matter, it looks like "nothing" is using memory, or certainly not enough to cause such a low-memory problem. The server was rebooted 24 days or so ago because it actually ran out of all virtual memory.
How do I solve this mystery? Am I using the wrong tools? Am I misunderstanding my tools? What can I do to track down the processes depleting my memory?
Everything below that init line is 0 across the board. So according to this (assuming I'm reading it correctly), 567MB or so of physical memory is in use by Java for my Tomcat instances. The largest of these looks to be a Java process that was spawned around the time of the last forced reboot. I guess I still don't understand how this translates into no memory on my system. I thought top showed all running processes and their associated resource utilization. It looks like, out of all 278 processes running, only 7 are using any memory at all, yet almost all of it is in use.
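One quick cross-check, assuming the instances all show up under the process name java, is to sum RSS across them (shared pages are counted once per process, so treat the result as an upper bound):

```shell
# Total resident memory of all processes named "java", in MB.
# RSS double-counts shared pages, so this is an upper bound.
ps -C java -o rss= | awk '{ sum += $1 } END { printf "%.0f MB\n", sum / 1024 }'
```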
In some cases the page cache, or the number of cached filesystem inodes or dentries, may grow too large. When a previously idle process becomes active and requires actual memory pages, the kernel must first evict dirty data to disk. To see if this is what is happening for you, clear the caches and see what change it makes to memory use, and to the time it takes for a dormant Java instance to respond.
To flush all caches, run
Code:
sudo sh -c 'sync ; echo 3 > /proc/sys/vm/drop_caches'
Whatever cache remains after that is in active use and cannot be dropped.
If you run into this often, consider reducing dirty_background_ratio and dirty_ratio, via e.g.
Code:
sudo sh -c 'echo 1 > /proc/sys/vm/dirty_background_ratio'
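If the lower ratios help, they can be made persistent across reboots (a sketch; the filename under /etc/sysctl.d/ is a hypothetical choice, and the dirty_ratio value shown is just an illustrative companion setting):

```shell
# /etc/sysctl.d/99-dirty.conf (hypothetical filename):
#   vm.dirty_background_ratio = 1
#   vm.dirty_ratio = 5
# Apply the file without rebooting:
sudo sysctl -p /etc/sysctl.d/99-dirty.conf
```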
I took a look at the linked thread (and another thread on memory management linked off of that one). Based on the lessons from those posts (that top's reporting on individual process memory utilization is not completely accurate, because shared libraries are counted against every process that maps them), it seems the java processes' memory usage is actually likely to be less than shown above. This makes even less sense to me.
So, since this script takes the shared libraries into account, what I take from this is that Java is using the most actual physical memory (2.3GB), and the rest of the memory usage reported in top actually belongs to shared libraries. Does this seem right? If so, my memory issue lies pretty much with Java, since I can't do anything about the memory used by shared libraries?
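One way to side-step the shared-library double counting entirely is PSS (proportional set size), which splits each shared page evenly among the processes mapping it, so per-process totals add up sensibly. A sketch, with <pid> standing in for one of the java PIDs:

```shell
# Sum the Pss lines of a process's smaps (values are in kB).
# Run as root or as the process owner.
awk '/^Pss:/ { sum += $2 } END { printf "%d kB\n", sum }' /proc/<pid>/smaps
```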
Nominal Animal, I appreciate the suggestion to flush caches. I did some reading from a couple of other posts and forums (including http://www.scottklarr.com/topic/134/...e-from-memory/) and am a little nervous about performing this on a live system. Plus, isn't the cache in this case the same as the one reported by free -m? If so, I only have 89MB in cache, so it doesn't seem like I would gain much from this. Am I misunderstanding what this does?
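For reference, the "cached" column free reports comes from the Cached line in /proc/meminfo, but the dentry/inode caches mentioned above are accounted under Slab/SReclaimable and do not appear in free's cache figure, so the two can differ. A quick way to see the breakdown:

```shell
# Page cache (Cached/Buffers) vs. kernel slab caches, which hold
# dentries and inodes and are reported separately from free's "cached".
grep -E '^(MemFree|Buffers|Cached|Slab|SReclaimable):' /proc/meminfo
```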