Linux - Newbie. This Linux forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-to's, this is the place!
Probably by analogy to the "free" command: in "top", where separate PIDs are sub-tasks, some values are already counted against the main process bearing the parent PID. You can fine-tune your "top" report; the details are in the manual page for top. It is very long; actually I have not finished it myself, since it requires more time and concentration than I presently have.
It is actually very difficult to "put a finger on" just how much memory a collection of processes is using. There are several reasons.
(1) All of the "memory" is virtual. But the amount of virtual memory does not directly correspond to the amount of physical resources being consumed to support that virtual memory.
(2) The virtual memory manager is "lazy." It'll keep pages lying around in real-memory. Ditto file-buffers and so forth. Unless there is actual pressure being exerted on the memory subsystem, pages won't be cleaned up or reclaimed.
(3) Much of the memory used is shared among several processes. If they're all running Java, then they're sharing (for example) the entire Java runtime engine. Even though the virtual-memory allocation cited for each process includes a tally of the shared segments, those shared segments are consuming the same physical-memory resources.
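To make the three points above concrete, top's VIRT/RES/SHR columns are derived from per-process files under /proc on Linux. The sketch below parses the relevant fields from a /proc/&lt;pid&gt;/status-style text; the sample values are made up for illustration (on a real system you would read /proc/&lt;pid&gt;/status itself, and note that top's SHR actually comes from /proc/&lt;pid&gt;/statm).

```python
# Minimal sketch: the memory fields that top's VIRT/RES/SHR columns are
# derived from. SAMPLE_STATUS is a made-up /proc/<pid>/status excerpt,
# used here only for illustration.

SAMPLE_STATUS = """\
Name:\tjava
VmSize:\t 1587200 kB
VmRSS:\t 1331200 kB
RssShmem:\t    5120 kB
"""

def parse_status(text):
    """Return a dict of the Vm*/Rss* memory fields, in kB."""
    fields = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        if key.startswith(("Vm", "Rss")):
            fields[key] = int(rest.split()[0])  # value is followed by "kB"
    return fields

mem = parse_status(SAMPLE_STATUS)
virt_mb = mem["VmSize"] // 1024   # roughly top's VIRT: all virtual memory
res_mb = mem["VmRSS"] // 1024     # roughly top's RES: what is physically resident
shr_mb = mem["RssShmem"] // 1024  # shared pages, counted again in every sharer
print(virt_mb, res_mb, shr_mb)    # 1550 1300 5
```

The gap between VmSize and VmRSS is point (1) above: virtual memory that has no physical pages behind it. RssShmem is part of point (3): those pages show up in every process that maps them.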
Thank you for the answer; it is very informative.
So you are saying I can't really calculate the number?
I need to find out how much memory the Java processes consume at any given point.
The value in the RES column for one process is a decent approximation of how much memory that process is using at the moment. So the first Java process you listed is using about 1.3GB.
Quote:
java heap space is set to -Xmx1024M on the machine i am testing this on.
That has no relation to the numbers shown in top. I'm not sure it has much relation to anything meaningful. Just one of your Java processes has a virtual size of 1550MB almost all of which is probably heap. I don't know what is actually controlled by that -Xmx1024M, but it is hard to believe that Java process would have a virtual size of 1550MB if its heap were really limited in the way you seem to expect.
Quote:
all of the above java processes together consume (22.1 + 2.6 + 3.1)=27.8 which is ~28%
There is some overlap (shared pages are counted in each process), so the total is less than the sum of the parts, and it is usually very hard to say how much less. But the SHR column gives an upper limit on the amount of sharing, and it is very low in this example, so in your case the sharing is a minor factor (putting the total at some unknown value from 27.3% to 27.8%).
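The bounding argument above is just arithmetic on the %MEM column. In the sketch below, the three %MEM values come from the thread, while the 0.5-percentage-point overlap bound is an assumed illustrative figure standing in for the SHR column (the actual SHR values were not quoted here).

```python
# Back-of-the-envelope bounds for the combined memory of the three Java
# processes. mem_pct values are from the thread; shared_pct_bound is an
# assumed stand-in for the (small) SHR-based overlap bound.

mem_pct = [22.1, 2.6, 3.1]     # %MEM of each Java process, from top
shared_pct_bound = 0.5         # assumed upper bound on double-counted shared pages

upper = round(sum(mem_pct), 1)            # a plain sum double-counts shared pages
lower = round(upper - shared_pct_bound, 1)

print(f"combined use is between {lower}% and {upper}%")
# combined use is between 27.3% and 27.8%
```

The true total sits somewhere in that interval; without per-page accounting (e.g. /proc/&lt;pid&gt;/smaps) you cannot pin it down further.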
For added confusion, the effective working set of a process often includes a significant amount of memory identified as "cache" in tools such as top. So the RES and %MEM shown in top might significantly undercount what a process is really using, while the sharing means that adding processes together might overcount the memory use. You don't know which effect is larger, so you don't know whether the total is more or less than the 28% it seems to be. If total system memory pressure is low, the fraction of these processes' memory that is in the cache rather than in their resident set may be very low, and the 27.?% might be accurate. But if system memory pressure is high, Linux cycles pages more rapidly between resident and cache, so these processes might be using significantly more than 28%.
Quote:
Does it mean that 1024*0.28 = 286.72M are being used out of 1024 available for java processes?
No. Because it is 28% (roughly) of something entirely unrelated to the 1024M heap you specified.
Quote:
Originally Posted by johnsfine
That has no relation to the numbers shown in top. I'm not sure it has much relation to anything meaningful.
-Xmx defines the maximum heap size (here 1 GB of virtual memory), so it does have some relationship with the top VIRT column.
What is interesting is that the first Java process is using much more virtual memory than this 1 GB, which means the JVM is using at least 550 MB of non-heap / native memory, which is quite a lot. And a large part of this virtual memory (1.3 GB) is resident, i.e. active.
Possible explanations include a very large number of classes loaded by the Java application, a massive number of threads running concurrently, custom native methods, or something else such as a memory leak in the JVM ...
I would recommend running jconsole to monitor what is going on inside this JVM's memory, and also to see how much of that 97% CPU is spent running user methods vs. garbage collecting.
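The non-heap estimate above is a simple lower bound: whatever part of the virtual size the heap ceiling cannot account for must be native memory. A sketch of the arithmetic, using the 1550 MB VIRT figure quoted earlier in the thread:

```python
# Rough lower bound on the JVM's non-heap (native) footprint: virtual
# size minus the -Xmx heap ceiling. Whatever the heap cannot account for
# must be native memory (thread stacks, loaded classes, JIT code, ...).

virt_mb = 1550   # VIRT reported by top for the first Java process
xmx_mb = 1024    # -Xmx1024M heap ceiling

native_floor_mb = virt_mb - xmx_mb   # the heap occupies at most xmx_mb
print(f"at least {native_floor_mb} MB of non-heap memory")
# at least 526 MB
```

That comes out to roughly half a gigabyte, in the same ballpark as the figure cited above; it is a floor, not a measurement, since the heap may be smaller than its -Xmx ceiling.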
Good answer. (Better than my answer on that aspect of the topic).
Quote:
Originally Posted by Poki
I need to find out how much memory do java processes consume at any given point
java heap space is set to -Xmx1024M on the machine i am testing this on.
When you say "how much memory" I understand that to mean "how much physical memory", and that is the question I answered at length above. Maybe you mean "how much virtual memory" or "how much heap"; those are different questions. As you see, jlliagre identified reasons that virtual memory use might be far larger than heap use, and physical memory use is different again from virtual.
I haven't used jconsole much, but I expect it could tell you quite a bit about heap state and use.
I expect your system has very low memory pressure and that is the reason the 1.3GB physical memory use of that task is such a large fraction of the 1.5GB virtual. (There are several possible reasons why the other two Java tasks have RES as a low fraction of VIRT even if memory pressure is low). But if memory pressure is high, then excess garbage collection would be the likely cause for the 1.3GB RES being so high and also the 97% CPU. So jlliagre's suggestion of investigating that task with jconsole is a good idea.
Thanks for the reply, I will have to reread it several more times :-)
The site from which I took these parameters is a very busy site with a lot of traffic and a lot of concurrent threads.
When the Java server is configured, -Xms and -Xmx are set.
The Java server restarts itself from time to time.
I need to find out why it is doing so.
In syslog I see that the Java process is being killed from time to time; now, the Java processes are the "heaviest", correct?
So I am trying to add more RAM to the server and trying to increase the heap size, so that it will not throw an "OutOfMemory" exception.
Here I am just trying to find out how I can know how much memory Java is really using.
Not likely to help. You haven't shown enough info for me to be sure, but I think the system has fairly low memory pressure, so adding more ram won't help.
If you think something is failing due to lack of ram, you can test that more easily by adding swap space than by adding ram:
1) If you add swap space and the process that was failing now slows down instead of failing, then you needed more ram.
2) If you add swap space and the process that was failing now works fine, then you just needed the swap space.
3) If you add swap space and the process that was failing still fails then adding ram wouldn't help either.
In your case, my guess is (3) neither ram nor swap space will help.
Quote:
trying to increase heap size, so that it will not spit "OutofMemory" exception.
There is a decent chance that will fix it. We can't tell whether it is hitting the heap limit, but probably it is. If it is hitting that heap limit maybe that is the result of a memory leak or other bug, so increasing the limit will just delay the failure. But maybe it is hitting the limit because the work it is doing is really that big and it needs a larger heap.
Quote:
I am just trying to find out how I can know how much memory Java is really using.
We already covered how you can get a good approximation of how much physical memory it is using and how much virtual memory it is using. That is detailed and accurate enough for your purposes. If you want to know how much heap memory it is using, I hope/expect jconsole can tell you, but I don't recall those details.
So you have a lot of concurrent threads; that explains the large non-heap memory usage.
Is there an error log with a clue about why Java restarts itself? You mentioned an OutOfMemory exception.
What application server is this?
Using what JVM?
With what garbage-collection algorithm?
Did you try running jconsole?
app server - Resin
garbage collection algorithm - default
jconsole - no, didn't run jconsole
JVM - didn't find out yet