LinuxQuestions.org (/questions/)
-   Linux - Server (https://www.linuxquestions.org/questions/linux-server-73/)
-   -   TOP command details (https://www.linuxquestions.org/questions/linux-server-73/top-command-details-657184/)

samiralmousawi 07-21-2008 09:06 AM

TOP command details
 
Here is my issue:

Using the 'top' command, I monitor memory usage on our Linux box, which runs several heavy Java processes. I capture the output of 'top' to a file when the server is healthy, and again after memory usage gets very high (and eventually starts to cause application issues).

When I compare the 'top' output from the healthy and unhealthy states:
-the summary at the top of the 'top' output shows little free memory when the system is unhealthy (as expected)
-but the per-process details look almost the same as in the healthy output.

How can I tell which process is filling up the memory?

I would also like to know whether the 'Mem' summary line can be reconstructed by summing the per-process 'VIRT', 'RES', or other columns:


Mem: 8164332k total, 3127924k used, 5036408k free, 473808k buffers


When I save the 'top' output to a file, import it into Excel, and add up the whole VIRT column, the total does not match the memory 'used' figure.
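
Here is the same sum done from the shell instead of Excel (a sketch: it assumes RES is field 6 of a batch-mode snapshot, as in the 'top' output later in this thread, and that values ending in 'm' are MiB; summing RES can still overshoot 'used' because shared pages are counted once per process):

# sum the RES column of one batch-mode top snapshot
top -b -n 1 | awk 'NR > 7 {
    v = $6
    if (v ~ /m$/) v = v * 1024    # "432m" means MiB; convert to KiB
    sum += v
} END { printf "total RES: %d KiB\n", sum }'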

Samir

nx5000 07-21-2008 09:34 AM

You might get more details with this.
Either your distro/arch ships a package for it, or you can compile it yourself (there is a link to the tar.gz on the web page above).

memstat -w

samiralmousawi 07-21-2008 10:48 AM

Is there another utility or script available that does not require compiling or root access? I am surprised there is no out-of-the-box tool that shows which processes are consuming all the memory (cache or no cache).
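
The closest I have found with stock commands (no root, no compiling) is the following; I am not sure it tells the whole story:

# how much of 'used' is buffers/cache rather than process memory (no root needed)
grep -E '^(MemTotal|MemFree|Buffers|Cached|SwapCached):' /proc/meminfo

# top 15 processes by resident set size
ps -eo pid,user,rss,vsz,comm --sort=-rss | head -n 15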

samiralmousawi 07-21-2008 09:37 PM

Finding culprit process
 
Below is the full output of 'top', first when the system was unhealthy (memory used at almost 95%), then after a reboot and application startup when the system was healthy. The applications running are mainly Java based (though there may also be monitoring tools and other utilities). Even when we killed the Java processes with '-9', the used memory did not clear up: it was just above 8GB before, and after killing all the Java processes it only dropped to 7GB. What was taking 7GB?! Which process(es) is the bad one?

'top' command output when system is 'unhealthy':
====
top - 20:16:29 up 22:04, 5 users, load average: 0.06, 0.08, 0.05
Tasks: 144 total, 1 running, 143 sleeping, 0 stopped, 0 zombie
Cpu(s): 1.8% us, 0.2% sy, 0.0% ni, 97.6% id, 0.4% wa, 0.0% hi, 0.1% si
Mem: 8164332k total, 8059596k used, 104736k free, 338772k buffers
Swap: 16777208k total, 0k used, 16777208k free, 5736756k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
18484 www 15 0 6144 952 696 R 4 0.0 0:00.02 top
1 root 16 0 4752 556 460 S 0 0.0 0:01.92 init
2 root RT 0 0 0 0 S 0 0.0 0:00.03 migration/0
3 root 34 19 0 0 0 S 0 0.0 0:00.00 ksoftirqd/0
4 root RT 0 0 0 0 S 0 0.0 0:00.04 migration/1
5 root 34 19 0 0 0 S 0 0.0 0:00.00 ksoftirqd/1
6 root RT 0 0 0 0 S 0 0.0 0:00.03 migration/2
7 root 34 19 0 0 0 S 0 0.0 0:00.00 ksoftirqd/2
8 root RT 0 0 0 0 S 0 0.0 0:00.01 migration/3
9 root 34 19 0 0 0 S 0 0.0 0:00.00 ksoftirqd/3
10 root 5 -10 0 0 0 S 0 0.0 0:00.05 events/0
..
13 root 5 -10 0 0 0 S 0 0.0 0:00.03 events/3
14 root 5 -10 0 0 0 S 0 0.0 0:00.00 khelper
15 root 15 -10 0 0 0 S 0 0.0 0:00.00 kacpid
70 root 5 -10 0 0 0 S 0 0.0 0:00.00 kblockd/0
71 root 5 -10 0 0 0 S 0 0.0 0:00.00 kblockd/1
72 root 5 -10 0 0 0 S 0 0.0 0:00.00 kblockd/2
73 root 5 -10 0 0 0 S 0 0.0 0:00.00 kblockd/3
74 root 15 0 0 0 0 S 0 0.0 0:00.12 khubd
101 root 20 0 0 0 0 S 0 0.0 0:00.00 pdflush
102 root 15 0 0 0 0 S 0 0.0 0:01.19 pdflush
105 root 12 -10 0 0 0 S 0 0.0 0:00.00 aio/0
103 root 15 0 0 0 0 S 0 0.0 0:00.13 kswapd1
104 root 15 0 0 0 0 S 0 0.0 0:00.31 kswapd0
..
252 root 23 0 0 0 0 S 0 0.0 0:00.00 kseriod
400 root 8 -10 0 0 0 S 0 0.0 0:00.00 kmirrord
423 root 15 0 0 0 0 S 0 0.0 0:00.45 kjournald
1692 root 6 -10 3604 448 364 S 0 0.0 0:00.01 udevd
2335 root 6 -10 0 0 0 S 0 0.0 0:00.00 kauditd
2473 root 15 0 0 0 0 S 0 0.0 0:00.00 emcpd
2474 root 15 0 0 0 0 S 0 0.0 0:00.00 emcpdefd
2475 root 19 0 0 0 0 S 0 0.0 0:00.00 emcprequestd
2648 root 19 0 0 0 0 S 0 0.0 0:00.00 MpxAsyncIoDaemo
2649 root 19 0 0 0 0 S 0 0.0 0:00.00 MpxResumeIoDaem
2650 root 15 0 0 0 0 S 0 0.0 0:00.00 MpxPeriodicCall
2651 root 19 0 0 0 0 S 0 0.0 0:00.00 MpxGrDaemon
2652 root 19 0 0 0 0 S 0 0.0 0:00.00 MpxProactiveDae
2653 root 19 0 0 0 0 S 0 0.0 0:00.00 MpxDispatchDaem
2654 root 19 0 0 0 0 S 0 0.0 0:00.00 MpxTestDaemon
2727 root 5 -10 0 0 0 S 0 0.0 0:00.00 kmpathd/0
..
2758 root 15 0 0 0 0 S 0 0.0 0:00.04 kjournald
..
2761 root 15 0 0 0 0 S 0 0.0 0:00.26 kjournald
2762 root 15 0 0 0 0 S 0 0.0 0:00.01 kjournald
2763 root 15 0 0 0 0 S 0 0.0 0:00.22 kjournald
2764 root 15 0 0 0 0 S 0 0.0 0:01.97 kjournald
2765 root 15 0 0 0 0 S 0 0.0 0:02.34 kjournald
3151 root 15 0 2528 276 208 S 0 0.0 0:01.10 cpuspeed
3152 root 15 0 2528 272 204 S 0 0.0 0:00.01 cpuspeed
3153 root 15 0 2528 272 204 S 0 0.0 0:00.02 cpuspeed
3155 root 15 0 2528 272 204 S 0 0.0 0:00.25 cpuspeed
3469 root 16 0 3628 608 496 S 0 0.0 0:00.34 syslogd
3473 root 16 0 2536 376 300 S 0 0.0 0:00.00 klogd
3484 root 16 0 2548 304 204 S 0 0.0 0:00.10 irqbalance
3496 rpc 15 0 4748 576 456 S 0 0.0 0:00.00 portmap
3516 root 17 0 5800 748 624 S 0 0.0 0:00.00 rpc.statd
3548 root 16 0 20916 392 188 S 0 0.0 0:00.00 rpc.idmapd
3733 root 18 0 2540 448 360 S 0 0.0 0:00.00 acpid
3747 root 16 0 6092 1876 532 S 0 0.0 0:00.00 perl
3757 root 16 0 71116 2036 1484 S 0 0.0 0:00.00 cupsd
3794 root 15 0 21920 1268 856 S 0 0.0 0:01.48 sshd
3809 root 15 0 8712 828 656 S 0 0.0 0:00.00 xinetd
3827 ntp 15 0 18560 5296 4228 S 0 0.1 0:00.00 ntpd
3837 root 18 0 19544 1336 1020 S 0 0.0 0:00.00 vsftpd
3856 root 16 0 34996 2552 1092 S 0 0.0 0:00.00 sendmail
3864 smmsp 16 0 27788 1904 836 S 0 0.0 0:00.00 sendmail
3875 root 15 0 4176 348 268 S 0 0.0 0:00.00 gpm
4058 root 16 0 96100 5544 2604 S 0 0.1 0:00.16 hpsmhd
4059 root 16 0 18776 1204 884 S 0 0.0 0:00.00 rotatelogs
4061 root 18 0 18776 1200 880 S 0 0.0 0:00.00 rotatelogs
4070 root 16 0 57072 952 536 S 0 0.0 0:00.01 crond
4079 hpsmh 19 0 364m 4364 1280 S 0 0.1 0:00.00 hpsmhd
4129 xfs 16 0 10228 1736 808 S 0 0.0 0:00.01 xfs
4141 root 15 0 285m 9296 4884 S 0 0.1 0:04.03 ovcd
4149 root 16 0 93996 6000 4020 S 0 0.1 0:02.58 ovbbccb
4168 root 15 0 91508 8708 5720 S 0 0.1 0:05.28 opcmsga
4170 root 16 0 95364 6396 4696 S 0 0.1 0:00.11 ovconfd
4192 root 16 0 29788 7444 4964 S 0 0.1 0:00.05 opcacta
4195 root 16 0 112m 7828 5732 S 0 0.1 0:01.07 coda
4199 root 18 0 17304 6928 4524 S 0 0.1 0:00.03 opcmsgi
4202 root 15 0 17396 7136 4688 S 0 0.1 0:00.35 opcle
4205 root 15 0 19884 7680 5192 S 0 0.1 0:00.06 opcmona
4224 root 16 0 8912 420 292 S 0 0.0 0:00.00 atd
4243 dbus 16 0 9644 788 616 S 0 0.0 0:00.05 dbus-daemon-1
4346 root 16 0 2876 1856 980 S 0 0.0 0:00.00 p_ctmag
4466 root 16 0 2868 1824 956 S 0 0.0 0:00.01 p_ctmat
4501 root 16 0 8992 932 776 S 0 0.0 0:00.00 cups-config-dae
4512 root 15 0 20184 7396 1268 S 0 0.1 0:35.79 hald
4530 root 15 0 4820 1660 1396 S 0 0.0 0:00.01 mstragent
4623 root 16 0 15588 2680 1980 S 0 0.0 0:00.46 mstragent
4653 root 16 0 55776 8628 5328 S 0 0.1 0:13.80 tlmagent
4864 root 17 0 2524 404 336 S 0 0.0 0:00.00 mingetty
4865 root 17 0 2524 404 336 S 0 0.0 0:00.00 mingetty
4866 root 18 0 2524 404 336 S 0 0.0 0:00.00 mingetty
4867 root 18 0 2524 404 336 S 0 0.0 0:00.00 mingetty
4869 root 17 0 2524 404 336 S 0 0.0 0:00.00 mingetty
4871 root 18 0 2524 404 336 S 0 0.0 0:00.00 mingetty
4873 root 16 0 90144 2900 2316 S 0 0.0 0:00.03 gdm-binary
5567 root 16 0 102m 2508 1812 S 0 0.0 0:00.00 gdm-binary
5583 root 15 0 58056 11m 2676 S 0 0.1 0:01.59 X
5680 root 16 0 31312 2704 1772 S 0 0.0 0:00.00 dsmcad
5688 gdm 15 0 123m 11m 6740 S 0 0.1 0:00.55 gdmgreeter
5989 www 15 0 470m 135m 16m S 0 1.7 1:29.22 java
8118 www 18 0 52756 1100 904 S 0 0.0 0:00.00 startPortalServ
8120 www 25 0 1567m 432m 32m S 0 5.4 25:58.16 java
8217 www 18 0 52756 1088 904 S 0 0.0 0:00.00 startJMSServer.
8219 www 16 0 909m 150m 23m S 0 1.9 9:08.18 java
8276 www 18 0 52756 1100 904 S 0 0.0 0:00.00 startEjbServer.
8278 www 17 0 1648m 411m 36m S 0 5.2 43:43.24 java
6489 root 16 0 38124 2564 2008 S 0 0.0 0:00.05 sshd
6491 www 15 0 38284 1920 1288 S 0 0.0 0:00.09 sshd
6495 www 16 0 18456 1516 1124 S 0 0.0 0:00.02 sftp-server
16777 root 16 0 38124 2604 2036 S 0 0.0 0:00.01 sshd
16779 www 16 0 38256 1856 1240 S 0 0.0 0:00.12 sshd
16780 www 16 0 53976 1616 1204 S 0 0.0 0:00.11 bash
16825 root 16 0 38124 2604 2036 S 0 0.0 0:00.01 sshd
16827 www 15 0 38256 1860 1252 S 0 0.0 0:00.12 sshd
16828 www 15 0 53976 1620 1212 S 0 0.0 0:00.02 bash
17473 root 16 0 38124 2604 2036 S 0 0.0 0:00.02 sshd
17480 www 16 0 38124 1776 1204 S 0 0.0 0:00.00 sshd
17481 www 16 0 53976 1612 1204 S 0 0.0 0:00.01 bash
17726 root 16 0 38124 2620 2052 S 0 0.0 0:00.01 sshd
17728 epcs 15 0 38256 1880 1284 S 0 0.0 0:00.03 sshd
17729 epcs 17 0 2724 624 444 S 0 0.0 0:00.00 ksh
17762 root 18 0 72296 1304 1012 S 0 0.0 0:00.00 su
17763 root 15 0 2724 664 484 S 0 0.0 0:00.01 ksh
17785 root 16 0 38124 2620 2052 S 0 0.0 0:00.00 sshd
17788 epcs 15 0 38256 1880 1284 S 0 0.0 0:00.00 sshd
17789 epcs 15 0 2724 624 444 S 0 0.0 0:00.01 ksh
17822 root 17 0 72296 1304 1012 S 0 0.0 0:00.00 su
17823 root 16 0 2724 652 472 S 0 0.0 0:00.00 ksh
17848 root 16 0 73036 1280 768 S 0 0.0 0:00.00 crond
17850 root 18 0 5356 912 776 S 0 0.0 0:00.00 sh
17853 root 16 0 13384 1708 1336 S 0 0.0 0:00.00 perl
17861 root 18 0 3636 528 436 S 0 0.0 0:00.00 vmstat


'top' output when system is healthy, after reboot and app startup:
===

top - 08:57:03 up 2 days, 12:12, 1 user, load average: 0.08, 0.08, 0.05
Tasks: 125 total, 2 running, 123 sleeping, 0 stopped, 0 zombie
Cpu(s): 1.3% us, 0.1% sy, 0.0% ni, 98.2% id, 0.3% wa, 0.0% hi, 0.0% si
Mem: 8164332k total, 3232068k used, 4932264k free, 507500k buffers
Swap: 16777208k total, 0k used, 16777208k free, 1036236k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1370 www 15 0 6144 936 696 R 2 0.0 0:00.01 top
1 root 16 0 4752 556 460 S 0 0.0 0:00.99 init
2 root RT 0 0 0 0 S 0 0.0 0:00.16 migration/0
3 root 34 19 0 0 0 S 0 0.0 0:00.00 ksoftirqd/0
4 root RT 0 0 0 0 S 0 0.0 0:00.12 migration/1
5 root 34 19 0 0 0 S 0 0.0 0:00.00 ksoftirqd/1
6 root RT 0 0 0 0 S 0 0.0 0:00.14 migration/2
7 root 34 19 0 0 0 S 0 0.0 0:00.00 ksoftirqd/2
8 root RT 0 0 0 0 S 0 0.0 0:00.08 migration/3
9 root 34 19 0 0 0 S 0 0.0 0:00.00 ksoftirqd/3
10 root 5 -10 0 0 0 S 0 0.0 0:00.14 events/0
11 root 5 -10 0 0 0 S 0 0.0 0:00.00 events/1
12 root 5 -10 0 0 0 S 0 0.0 0:00.01 events/2
13 root 5 -10 0 0 0 S 0 0.0 0:00.03 events/3
14 root 5 -10 0 0 0 S 0 0.0 0:00.01 khelper
15 root 15 -10 0 0 0 S 0 0.0 0:00.00 kacpid
70 root 5 -10 0 0 0 S 0 0.0 0:00.00 kblockd/0
71 root 5 -10 0 0 0 S 0 0.0 0:00.00 kblockd/1
72 root 5 -10 0 0 0 S 0 0.0 0:00.00 kblockd/2
73 root 5 -10 0 0 0 S 0 0.0 0:00.00 kblockd/3
74 root 15 0 0 0 0 S 0 0.0 0:00.12 khubd
101 root 20 0 0 0 0 S 0 0.0 0:00.00 pdflush
102 root 15 0 0 0 0 S 0 0.0 0:01.27 pdflush
105 root 12 -10 0 0 0 S 0 0.0 0:00.00 aio/0
103 root 25 0 0 0 0 S 0 0.0 0:00.00 kswapd1
104 root 25 0 0 0 0 S 0 0.0 0:00.00 kswapd0
106 root 5 -10 0 0 0 S 0 0.0 0:00.00 aio/1
107 root 5 -10 0 0 0 S 0 0.0 0:00.00 aio/2
108 root 5 -10 0 0 0 S 0 0.0 0:00.00 aio/3
252 root 23 0 0 0 0 S 0 0.0 0:00.00 kseriod
400 root 7 -10 0 0 0 S 0 0.0 0:00.00 kmirrord
423 root 15 0 0 0 0 S 0 0.0 0:01.08 kjournald
1705 root 6 -10 3604 444 364 S 0 0.0 0:00.01 udevd
2336 root 6 -10 0 0 0 S 0 0.0 0:00.00 kauditd
2472 root 15 0 0 0 0 S 0 0.0 0:00.00 emcpd
2473 root 15 0 0 0 0 S 0 0.0 0:00.00 emcpdefd
2474 root 19 0 0 0 0 S 0 0.0 0:00.00 emcprequestd
2646 root 20 0 0 0 0 S 0 0.0 0:00.00 MpxAsyncIoDaemo
2647 root 20 0 0 0 0 S 0 0.0 0:00.00 MpxResumeIoDaem
2648 root 15 0 0 0 0 S 0 0.0 0:00.00 MpxPeriodicCall
2649 root 20 0 0 0 0 S 0 0.0 0:00.00 MpxGrDaemon
2650 root 20 0 0 0 0 S 0 0.0 0:00.00 MpxProactiveDae
2651 root 20 0 0 0 0 S 0 0.0 0:00.00 MpxDispatchDaem
2652 root 20 0 0 0 0 S 0 0.0 0:00.00 MpxTestDaemon
2725 root 5 -10 0 0 0 S 0 0.0 0:00.00 kmpathd/0
2726 root 5 -10 0 0 0 S 0 0.0 0:00.00 kmpathd/1
2727 root 5 -10 0 0 0 S 0 0.0 0:00.00 kmpathd/2
2728 root 8 -10 0 0 0 S 0 0.0 0:00.00 kmpathd/3
2756 root 15 0 0 0 0 S 0 0.0 0:00.14 kjournald
2757 root 15 0 0 0 0 S 0 0.0 0:00.00 kjournald
2758 root 15 0 0 0 0 S 0 0.0 0:00.00 kjournald
2759 root 15 0 0 0 0 S 0 0.0 0:00.74 kjournald
2760 root 15 0 0 0 0 S 0 0.0 0:00.01 kjournald
2761 root 15 0 0 0 0 S 0 0.0 0:00.59 kjournald
2762 root 15 0 0 0 0 S 0 0.0 0:05.38 kjournald
2763 root 15 0 0 0 0 S 0 0.0 0:05.11 kjournald
3151 root 15 0 2528 276 208 S 0 0.0 0:03.74 cpuspeed
3152 root 15 0 2528 272 204 S 0 0.0 0:03.33 cpuspeed
3153 root 15 0 2528 272 204 S 0 0.0 0:02.75 cpuspeed
3154 root 15 0 2528 272 204 S 0 0.0 0:07.42 cpuspeed
3469 root 16 0 3628 608 496 S 0 0.0 0:00.64 syslogd
3473 root 16 0 2536 376 300 S 0 0.0 0:00.00 klogd
3484 root 16 0 2548 304 204 S 0 0.0 0:00.05 irqbalance
3496 rpc 16 0 4748 588 464 S 0 0.0 0:00.00 portmap
3516 root 17 0 5800 748 624 S 0 0.0 0:00.00 rpc.statd
3548 root 16 0 20916 392 188 S 0 0.0 0:00.00 rpc.idmapd
3733 root 19 0 2540 448 360 S 0 0.0 0:00.00 acpid
3747 root 16 0 6092 1876 532 S 0 0.0 0:00.00 perl
3794 root 16 0 21920 1268 856 S 0 0.0 0:04.97 sshd
3809 root 15 0 8712 828 656 S 0 0.0 0:00.00 xinetd
3827 ntp 15 0 18560 5296 4228 S 0 0.1 0:00.01 ntpd
3837 root 18 0 19544 1336 1020 S 0 0.0 0:00.00 vsftpd
3856 root 16 0 34996 2552 1092 S 0 0.0 0:00.01 sendmail
3864 smmsp 15 0 27788 1908 840 S 0 0.0 0:00.00 sendmail
3875 root 15 0 4176 348 268 S 0 0.0 0:00.00 gpm
4058 root 15 0 96100 5544 2604 S 0 0.1 0:00.16 hpsmhd
4059 root 16 0 18776 1204 884 S 0 0.0 0:00.00 rotatelogs
4061 root 20 0 18776 1200 880 S 0 0.0 0:00.00 rotatelogs
4070 root 16 0 57072 952 536 S 0 0.0 0:00.15 crond
4079 hpsmh 18 0 364m 4364 1280 S 0 0.1 0:00.00 hpsmhd
4131 xfs 16 0 10228 1736 808 S 0 0.0 0:00.01 xfs
4161 root 15 0 285m 9628 4888 S 0 0.1 0:10.40 ovcd
4169 root 16 0 93996 6184 4020 S 0 0.1 0:06.91 ovbbccb
4188 root 15 0 92452 8960 5800 S 0 0.1 0:14.04 opcmsga
4190 root 16 0 96372 6424 4700 S 0 0.1 0:00.10 ovconfd
4212 root 16 0 28768 7440 4960 S 0 0.1 0:00.08 opcacta
4214 root 15 0 112m 7884 5736 S 0 0.1 0:02.83 coda
4219 root 18 0 17304 6928 4524 S 0 0.1 0:00.03 opcmsgi
4222 root 15 0 17396 7136 4688 S 0 0.1 0:00.74 opcle
4226 root 15 0 19884 7680 5192 S 0 0.1 0:00.12 opcmona
4244 root 16 0 8912 420 292 S 0 0.0 0:00.00 atd
4264 dbus 16 0 9644 788 616 S 0 0.0 0:00.07 dbus-daemon-1
4412 root 16 0 2876 1856 980 S 0 0.0 0:00.00 p_ctmag
4581 root 16 0 2868 1824 956 S 0 0.0 0:00.01 p_ctmat
4610 root 16 0 8992 932 776 S 0 0.0 0:00.00 cups-config-dae
4621 root 15 0 20184 7384 1272 S 0 0.1 1:38.28 hald
4638 root 15 0 4820 1660 1396 R 0 0.0 0:00.03 mstragent
4732 root 16 0 15312 2528 1980 S 0 0.0 0:00.10 mstragent
4770 root 16 0 55752 8556 5300 S 0 0.1 0:44.37 tlmagent
4973 root 18 0 2524 404 336 S 0 0.0 0:00.00 mingetty
4974 root 18 0 2524 404 336 S 0 0.0 0:00.00 mingetty
4975 root 18 0 2524 404 336 S 0 0.0 0:00.00 mingetty
4976 root 18 0 2524 404 336 S 0 0.0 0:00.00 mingetty
4978 root 21 0 2524 404 336 S 0 0.0 0:00.00 mingetty
4981 root 18 0 2524 404 336 S 0 0.0 0:00.00 mingetty
4982 root 15 0 90144 2900 2316 S 0 0.0 0:00.03 gdm-binary
5662 root 16 0 102m 2508 1812 S 0 0.0 0:00.00 gdm-binary
5672 root 15 0 58056 11m 2676 S 0 0.1 0:05.28 X
5775 root 16 0 31312 2704 1772 S 0 0.0 0:00.00 dsmcad
5783 gdm 15 0 123m 11m 6740 S 0 0.1 0:00.62 gdmgreeter
6176 www 15 0 500m 143m 16m S 0 1.8 3:56.92 java
17994 www 18 0 52756 1100 904 S 0 0.0 0:00.00 startPortalServ
17996 www 16 0 1593m 289m 30m S 0 3.6 32:01.04 java
18095 www 18 0 52756 1088 904 S 0 0.0 0:00.00 startJMSServer.
18097 www 17 0 901m 151m 23m S 0 1.9 14:11.92 java
18386 www 18 0 52756 1100 904 S 0 0.0 0:00.00 startEjbServer.
18388 www 15 0 1586m 322m 34m S 0 4.0 60:01.31 java
29526 root 16 0 71116 2156 1592 S 0 0.0 0:00.00 cupsd
32292 root 17 0 73036 1280 768 S 0 0.0 0:00.00 crond
32295 root 18 0 5356 912 776 S 0 0.0 0:00.00 sh
32296 root 16 0 13384 1716 1344 S 0 0.0 0:00.00 perl
32305 root 16 0 3636 532 440 S 0 0.0 0:00.00 vmstat
1333 root 16 0 38124 2604 2036 S 0 0.0 0:00.02 sshd
1337 www 16 0 38256 1836 1240 S 0 0.0 0:00.00 sshd
1338 www 16 0 53976 1608 1204 S 0 0.0 0:00.02 bash

hardcorelinux 07-22-2008 05:33 AM

Sort the top output by memory usage; the outputs you pasted here are not sorted that way.

Press M (capital M) in interactive mode to sort top's output by memory usage. Once sorted, the heaviest processes appear at the top of the list.
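
If you are capturing snapshots to a file, you can get roughly the same ordering in batch mode (a sketch; it assumes %MEM is field 10, as in your pasted output):

# skip the 7 header lines, sort numerically on %MEM, show the top 15
top -b -n 1 | tail -n +8 | sort -rn -k10 | head -n 15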

Or you can check the usage of a particular process. For example, to list the java processes with the largest resident sizes first:

ps aux --sort=-rss | grep "java"

samiralmousawi 07-22-2008 09:48 AM

I did sort by memory usage (not in the output I posted above), but my problem is that even after killing the 4 heavy Java processes with '-9', memory usage stayed at around 7GB. So what is consuming all the memory? Which process(es)? Can anyone tell by looking at the top output, or does top not really show everything that is going on?
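
One thing I notice comparing my two snapshots: the unhealthy one reports 5736756k 'cached' on the Swap line. Is that just page cache that the kernel frees on demand, rather than memory held by any process? If so, the '-/+ buffers/cache' row of free should show usage excluding it; something I plan to check next time:

# 'used' in the '-/+ buffers/cache' row excludes buffers and page cache
free -k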

hardcorelinux 07-23-2008 12:15 AM

If you cannot find the memory usage from top, use other ways such as the ps command, the pmap command, or Exmap (if it works on your Linux distro). Exmap is a tool which allows you to see how much memory is in use by different processes.

www.berthels.co.uk/exmap/faq.html
www.berthels.co.uk/exmap
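
For a quick per-process check without installing anything, pmap from procps prints a per-mapping breakdown plus a total (a sketch, using one of the java PIDs from your unhealthy output):

# extended format; the final lines show the total Kbytes/RSS/Dirty for the process
pmap -x 8120 | tail -n 2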


Exmap has documentation that explains its use and the meaning of the various values. In short:

Run exmap. It may be better to run exmap as root in order to get access to all memory.
The 'Processes' tab shows memory information about each process in the first listview. The values are:
VM - VIRT from top - not really useful.
Mapped size - the total memory actually used by the process, both in RAM and swap. Note, however, that e.g. libraries are not written out to swap but simply discarded (and read back from their files if needed). This value even includes shared memory.
Resident size - like mapped size, but counting only RAM (without swap).
Sole mapped - memory used only by this process. For example, if a process uses a shared library that no other process currently uses, that library is counted here.
Writable - memory holding the process's private data. It is the part of sole mapped that the process has already written to.
Effective - the effective values are an attempt to compute how much memory a process uses in practice. Effective mapped/resident are the mapped/resident values adjusted for shared memory: memory shared by several processes is divided equally among them, i.e. if 10 processes each use the whole of a 10MB shared library, the library counts as 10MB in the mapped/resident values but only as 1MB in the effective values.
The second listview shows memory mappings for the selected process. There are memory-mapped files and binaries and also several special mappings:
[heap] - dynamically allocated memory (i.e. malloc etc.)
[stack] - the stack of the process
[anon] - anonymous mappings from the mmap() system call - these can usually be viewed the same way as [heap], although in multithreaded applications some [anon] mappings may really be additional [stack] mappings
The remaining two listviews provide details about the mappings, for experts (see the exmap documentation).
Exmap in practice: run exmap (possibly as root, if you get messages about failing to open files that you need). Sort the processes by effective mapped size; higher values are worse. Use the second listview to find out which file may be responsible, or whether the high memory usage comes from the process's own data ([heap], [anon] and [stack]). You usually cannot change which libraries are used, so the values actually worth checking are the writable and sole mapped columns - they are the memory used by the application itself (although a portion may of course come from calling library functions).

