Old 07-09-2010, 07:41 AM   #1
kazabubu (LQ Newbie)
Total used memory a lot higher than sum of RSS


Hello,

I have a RHEL 5.2 machine, and I noticed that the total used memory (as shown by the "free" command) is 3GB higher than the sum of the RSS of all processes (collected using "ps aux").

I know it's not supposed to be exact, because RSS counts shared memory multiple times (so the RSS sum should actually come out higher than the used memory, not lower).
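
Roughly how I'm doing the comparison (a sketch; RSS is column 6 of "ps aux", in KB, and this assumes the usual RHEL 5 procps "free" layout):

Code:
ps aux | awk 'NR>1 {sum += $6} END {printf "sum of RSS: %d MB\n", sum/1024}'
free -m | awk '/^Mem:/ {print "used:", $3, "MB"}'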

The server runs Apache, which runs a pretty heavy PHP program.

My suspicion is that it's a memory leak. Or is there another reason for it?

Thanks for the help.
 
Old 07-09-2010, 08:07 AM   #2
johnsfine (LQ Guru)
http://www.linuxatemyram.com/

The above URL gives a very simplified discussion of a complicated topic. I don't agree with all the details there (especially regarding the relationship to swap).

But for your question, it is most of the answer.
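
In short: Linux uses otherwise-idle RAM for disk cache, and "free" counts that cache as "used". The number to compare against your processes is used minus buffers minus cached, which the "-/+ buffers/cache:" row already shows. A sketch, assuming the RHEL 5 procps "free" layout:

Code:
free -m | awk '/^Mem:/ {printf "used excluding buffers/cache: %d MB\n", $3 - $6 - $7}'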

Last edited by johnsfine; 07-09-2010 at 08:10 AM.
 
Old 07-10-2010, 01:55 AM   #3
kazabubu (LQ Newbie)
Thanks for the reply.

I know about the disk cache. But another machine, installed more recently, uses about the same amount of buffers and cache, and it also has free memory that is not used by the caches or the applications.
Also, the first machine has started using swap.

About the sum of all processes' RSS: shouldn't that be equal to the amount of caches/buffers being used?

*** I will post the exact data tomorrow, because I can't right now.

Thanks.
 
Old 07-10-2010, 04:53 AM   #4
johnsfine (LQ Guru)
Quote:
Originally Posted by kazabubu
But another machine, installed more recently, uses about the same amount of buffers and cache, and it also has free memory that is not used by the caches or the applications.
Maybe when I see the details, I'll have some idea what you mean by the above (its significance and/or the implied question).

Quote:
Also, the first machine has started using swap.
That's most of what I disagree with on the LinuxAteMyRam page. Linux will use swap rather than take memory back from caching when it decides that doing so is better. That is usually an accurate choice, and trying to stop it is generally a bad idea.
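
(Not covered on that page, but the knob that biases this tradeoff is the vm.swappiness sysctl: 0-100, where higher means the kernel is more willing to swap in order to preserve cache. A pointer only, not a recommendation to change it:)

Code:
sysctl vm.swappiness             # show the current value; the default is 60
# sysctl -w vm.swappiness=10    # example only: lowering it biases the kernel against swapping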

Quote:
About the sum of all processes' RSS: shouldn't that be equal to the amount of caches/buffers being used?
No.
 
Old 07-11-2010, 01:37 AM   #5
kazabubu (LQ Newbie)
Here is the data I collected from the machines. Note that one machine (which I'll call machine A) has 6GB of memory and the other (machine B) has 8GB.

--------------------
Machine A:

"free -m"

total used free shared buffers cached
Mem: 5938 2742 3195 0 487 1220
-/+ buffers/cache: 1034 4903
Swap: 2047 0 2047
--------
ps aux --sort rss | awk '{if($6 > 2000) print $0;}' :

Quote:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 24717 0.0 0.0 63024 2332 ? Ss Jun18 0:11 sendmail: accepting connections
root 6383 0.0 0.0 283556 2360 ? SLl Jul01 3:46 /usr/sbin/collectd -C /etc/collectd.conf -f
dbus 3708 0.0 0.0 33648 2764 ? Ss Jun08 0:00 dbus-daemon --system
root 3662 0.0 0.0 28808 2928 ? Ss Jun08 0:00 rpc.statd
smmsp 12715 0.0 0.0 57900 3592 ? S 05:00 0:02 /usr/sbin/sendmail -FCronDaemon -i -odi -oem -oi -t
root 15404 0.0 0.0 30636 3600 ? S 04:02 0:07 /usr/bin/perl /usr/local/shl/apache_syslog
nscd 13308 0.0 0.0 171580 3680 ? Ssl 06:01 0:00 /usr/sbin/nscd
bsasson 14926 0.0 0.0 114384 3824 ? S 08:04 0:00 sshd: bsasson@pts/0
68 4030 0.0 0.0 29620 4068 ? Ss Jun08 0:00 hald
root 14943 0.0 0.0 140412 4104 pts/0 S 08:04 0:00 su -
root 12704 0.1 0.0 141768 4608 ? S 05:00 0:19 crond
ntp 3899 0.0 0.0 23812 5452 ? SLs Jun08 0:00 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g
root 14924 0.0 0.1 114384 6644 ? Ss 08:03 0:00 sshd: bsasson [priv]
root 3961 0.0 0.1 160744 7920 ? S Jun08 0:09 /usr/bin/python /usr/sbin/osad --pid-file /var/run/osad.pid
root 23326 0.0 0.1 209924 11020 ? Ss Jun16 0:31 /usr/sbin/httpd
apache 15143 1.1 0.3 227556 22756 ? S 08:23 0:03 /usr/sbin/httpd
apache 15142 1.0 0.3 227556 23044 ? S 08:22 0:03 /usr/sbin/httpd
apache 15117 1.3 0.3 228580 24200 ? S 08:21 0:05 /usr/sbin/httpd
apache 15144 1.4 0.4 229604 24764 ? S 08:24 0:03 /usr/sbin/httpd
apache 15008 1.4 0.4 229604 25264 ? S 08:09 0:14 /usr/sbin/httpd
apache 14907 1.1 0.4 229604 25268 ? S 08:02 0:17 /usr/sbin/httpd
apache 14482 1.1 0.4 229604 25280 ? S 07:52 0:25 /usr/sbin/httpd
apache 15029 1.7 0.4 229604 25280 ? S 08:11 0:17 /usr/sbin/httpd
apache 14889 1.3 0.4 229604 25320 ? S 08:00 0:22 /usr/sbin/httpd
apache 15009 1.3 0.4 229604 25340 ? S 08:10 0:14 /usr/sbin/httpd
apache 14481 1.2 0.4 229972 25660 ? S 07:52 0:26 /usr/sbin/httpd
apache 14438 1.4 0.4 230056 25728 ? S 07:47 0:34 /usr/sbin/httpd
apache 14427 1.3 0.4 230628 26312 ? S 07:46 0:33 /usr/sbin/httpd
apache 14474 1.4 0.4 230628 26340 ? S 07:51 0:30 /usr/sbin/httpd
apache 14428 1.4 0.4 230636 26348 ? S 07:46 0:36 /usr/sbin/httpd
apache 14393 1.2 0.4 231652 26436 ? S 07:42 0:32 /usr/sbin/httpd
apache 14250 1.1 0.4 231652 27208 ? S 07:23 0:44 /usr/sbin/httpd
apache 15415 1.3 0.4 233724 29416 ? S 04:02 3:40 /usr/sbin/httpd
apache 15408 1.3 0.4 233724 29480 ? S 04:02 3:40 /usr/sbin/httpd
apache 15411 1.3 0.4 234748 29556 ? S 04:02 3:31 /usr/sbin/httpd
apache 15502 1.3 0.4 234748 29704 ? S 04:02 3:36 /usr/sbin/httpd
apache 15503 1.3 0.4 234068 29800 ? S 04:02 3:39 /usr/sbin/httpd
100 13725 0.0 7.3 628704 445972 ? Ssl Jun16 17:48 memcached -d -p 11211 -u memcached -m 1024 -c 2048 -P /var/run/memcached/memcached.pid
------------------------------------------------------------------
Machine B:


total used free shared buffers cached
Mem: 7983 7935 47 0 428 4304
-/+ buffers/cache: 3201 4781
Swap: 2047 0 2047


ps aux --sort rss | awk '{if($6 > 2000) print $0;}'


Quote:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 3378 0.0 0.0 220004 2004 ? SLl Jul08 0:39 /usr/sbin/collectd -C /etc/collectd.conf -f
root 3319 0.0 0.0 63008 2304 ? Ss Jul08 0:01 sendmail: accepting connections
root 18620 0.0 0.0 135220 2408 pts/1 S Jul09 0:00 su - opj73was
dbus 3127 0.0 0.0 35696 2804 ? Ss Jul08 0:00 dbus-daemon --system
rpcuser 3075 0.0 0.0 30844 2944 ? Ss Jul08 0:00 rpc.statd
nscd 4955 0.0 0.0 172736 3452 ? Ssl 04:09 0:00 /usr/sbin/nscd
root 4778 0.0 0.0 30628 3576 ? S 04:02 0:04 /usr/bin/perl /usr/local/shl/apache_syslog
68 3481 0.0 0.0 29312 3784 ? Ss Jul08 0:00 hald
bmoisan 5001 0.0 0.0 114196 3812 ? S Jul08 0:00 sshd: bmoisan@pts/2
bsasson 6217 0.0 0.0 114196 3840 ? S 08:03 0:00 sshd: bsasson@pts/0
bmoisan 18481 0.0 0.0 114784 3908 ? S Jul09 0:00 sshd: bmoisan@pts/1
root 18501 0.0 0.0 140296 4076 pts/1 S Jul09 0:00 su - opj73was
root 18524 0.0 0.0 140296 4080 pts/1 S Jul09 0:00 su -
root 6234 0.0 0.0 140396 4084 pts/0 S 08:04 0:00 su -
ntp 3274 0.0 0.0 23808 5444 ? SLs Jul08 0:00 ntpd -u ntp:ntp -p /var/run/ntpd.pid -g
root 4987 0.0 0.0 114196 6664 ? Ss Jul08 0:00 sshd: bmoisan [priv]
root 6215 0.0 0.0 114196 6664 ? Ss 08:03 0:00 sshd: bsasson [priv]
root 18479 0.0 0.0 114784 6668 ? Ss Jul09 0:00 sshd: bmoisan [priv]
root 3406 0.0 0.0 158548 7644 ? S Jul08 0:00 python /usr/sbin/osad --pid-file /var/run/osad.pid
root 3361 0.0 0.1 221400 11244 ? Ss Jul08 0:05 /usr/sbin/httpd
apache 6544 1.4 0.2 240056 23656 ? S 08:33 0:00 /usr/sbin/httpd
apache 6539 1.0 0.2 241080 24272 ? S 08:32 0:01 /usr/sbin/httpd
apache 6502 1.1 0.3 241080 24644 ? S 08:28 0:04 /usr/sbin/httpd
apache 6501 1.4 0.3 241080 24992 ? S 08:28 0:05 /usr/sbin/httpd
apache 6392 1.4 0.3 241080 25104 ? S 08:20 0:12 /usr/sbin/httpd
apache 6445 1.3 0.3 241080 25104 ? S 08:23 0:08 /usr/sbin/httpd
apache 6503 1.7 0.3 241080 25104 ? S 08:28 0:06 /usr/sbin/httpd
apache 6444 1.4 0.3 241080 25116 ? S 08:23 0:09 /usr/sbin/httpd
apache 6300 1.2 0.3 241080 25140 ? S 08:09 0:18 /usr/sbin/httpd
apache 6350 1.3 0.3 241080 25184 ? S 08:15 0:15 /usr/sbin/httpd
apache 6302 1.3 0.3 241092 25240 ? S 08:10 0:19 /usr/sbin/httpd
apache 6305 1.3 0.3 241080 25384 ? S 08:11 0:18 /usr/sbin/httpd
apache 6301 1.2 0.3 241092 25400 ? S 08:10 0:18 /usr/sbin/httpd
apache 6304 1.3 0.3 241080 25412 ? S 08:11 0:19 /usr/sbin/httpd
apache 6347 1.4 0.3 242104 26344 ? S 08:14 0:16 /usr/sbin/httpd
apache 5835 1.1 0.3 242104 26372 ? S 07:46 0:33 /usr/sbin/httpd
apache 5627 1.2 0.3 243140 27420 ? S 07:22 0:52 /usr/sbin/httpd
apache 5977 1.3 0.3 243500 27492 ? S 07:59 0:28 /usr/sbin/httpd
apache 5644 1.1 0.3 243140 27496 ? S 07:24 0:47 /usr/sbin/httpd
apache 5839 1.2 0.3 243140 27532 ? S 07:46 0:36 /usr/sbin/httpd
apache 5837 1.2 0.3 244152 27736 ? S 07:46 0:36 /usr/sbin/httpd
apache 4871 1.2 0.3 246572 30560 ? S 04:02 3:17 /usr/sbin/httpd
apache 4795 1.3 0.3 246224 30656 ? S 04:02 3:34 /usr/sbin/httpd
apache 4879 1.1 0.3 246224 30716 ? S 04:02 3:15 /usr/sbin/httpd
100 4154 0.0 4.1 405824 339252 ? Ssl Jul08 2:17 memcached -d -p 11211 -u memcached -m 1024 -c 2048 -P /var/run/memcached/memcached.pid
----------------------------------------------------------------------------

The same number of Apache processes (they take most of the memory) are running on both machines, and they are more or less the same size.

But on machine A there seems to be more memory left over.

The kernel versions are:

machine A: 2.6.18-194.3.1.el5

machine B: 2.6.18-128.4.1.el5

Last edited by kazabubu; 07-11-2010 at 01:42 AM. Reason: adding more data
 
Old 07-11-2010, 05:43 AM   #6
johnsfine (LQ Guru)
On your machine B, the memory use excluding buffers and cache is 3201MB. Both the evidence from machine A and the evidence from the ps output imply that this is about 2GB higher than the amount used by all those processes.

So something else is using 2GB on machine B. I don't know how to discover what.

I don't have experience with virtual machines, nor with reserving memory for large pages, so I don't know how either of those might show up in the data you posted. At the moment I can't think of anything else that might be using your missing 2GB.
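
One thing worth doing is comparing /proc/meminfo on the two machines; kernel-side consumers (slab caches, page tables, vmalloc, huge pages) show up there but in no process's RSS. For example:

Code:
egrep 'Slab|PageTables|Vmalloc|HugePages|AnonPages|Mapped' /proc/meminfo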
 
Old 07-11-2010, 08:34 AM   #7
kazabubu (LQ Newbie)
These are not virtual machines.

What's disturbing to me is why the 6GB machine has free memory at all (beyond the buffers/cache).
 
Old 07-11-2010, 09:36 AM   #8
kazabubu (LQ Newbie)
I found something new:

On machine B, the slab is very high (over 2GB). Here is the info from /proc/meminfo:

MemTotal: 8175136 kB
MemFree: 48636 kB
Buffers: 285236 kB
Cached: 4412088 kB
SwapCached: 0 kB
Active: 2812252 kB
Inactive: 2900016 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 8175136 kB
LowFree: 48636 kB
SwapTotal: 2096472 kB
SwapFree: 2096332 kB
Dirty: 136 kB
Writeback: 0 kB
AnonPages: 1014968 kB
Mapped: 19104 kB
Slab: 2361912 kB
PageTables: 24856 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 6184040 kB
Committed_AS: 1329584 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 264136 kB
VmallocChunk: 34359473527 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
Hugepagesize: 2048 kB

I wonder if that's normal, because on the other machine it's a lot smaller, only about 100MB.
 
Old 07-11-2010, 09:43 AM   #9
wagaboy (Member)
Quote:
Originally Posted by kazabubu
Hello,

I have a RHEL 5.2 machine, and I noticed that the total used memory (as shown by the "free" command) is 3GB higher than the sum of the RSS of all processes (collected using "ps aux").

I know it's not supposed to be exact, because RSS counts shared memory multiple times (so the RSS sum should actually come out higher than the used memory, not lower).

The server runs Apache, which runs a pretty heavy PHP program.

My suspicion is that it's a memory leak. Or is there another reason for it?

Thanks for the help.
This is not a memory leak; it's the way Linux is designed.

Linux uses 'optimistic memory allocation': physical page frames are not allocated at malloc time, but only when the pages are actually referenced. So if you malloc 100MB, for example, and never reference it, it is never reflected in RSS, but it does show up in virtual memory (VSZ).
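
You can see the effect in the ps output above: every process's VSZ is far larger than its RSS. To watch just one process (a sketch, using the memcached instance from your listing):

Code:
ps -o pid,vsz,rss,args -C memcached    # VSZ counts every mapped page; RSS only the pages resident in RAM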
 
Old 07-11-2010, 11:28 AM   #10
johnsfine (LQ Guru)
Quote:
Originally Posted by kazabubu
These are not virtual machines.
I didn't think they were. I was speculating that you might be running a virtual machine on them. But now you have more info, so both of my guesses were off track.

Quote:
Originally Posted by kazabubu
Slab: 2361912 kB
...
I wonder if that's normal, because on the other machine it's a lot smaller, only about 100MB.
I don't think that is normal, so you should investigate it further.

What do you get from
Code:
cat /proc/slabinfo
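
If you want to total up which caches are responsible, something like this should work (a sketch; in the slabinfo 2.1 format, fields 3 and 4 are num_objs and objsize in bytes):

Code:
awk 'NR>2 {printf "%-28s %8.1f MB\n", $1, $3*$4/1048576}' /proc/slabinfo | sort -rn -k2 | head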

Last edited by johnsfine; 07-11-2010 at 11:31 AM.
 
Old 07-12-2010, 01:57 AM   #11
kazabubu (LQ Newbie)
Here is /proc/slabinfo:

Quote:
slabinfo - version: 2.1
# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
fib6_nodes 9 59 64 59 1 : tunables 120 60 8 : slabdata 1 1 0
ip6_dst_cache 10 12 320 12 1 : tunables 54 27 8 : slabdata 1 1 0
ndisc_cache 1 15 256 15 1 : tunables 120 60 8 : slabdata 1 1 0
RAWv6 11 12 960 4 1 : tunables 54 27 8 : slabdata 3 3 0
UDPv6 3 4 896 4 1 : tunables 54 27 8 : slabdata 1 1 0
tw_sock_TCPv6 248 340 192 20 1 : tunables 120 60 8 : slabdata 17 17 0
request_sock_TCPv6 0 0 192 20 1 : tunables 120 60 8 : slabdata 0 0 0
TCPv6 59 76 1728 4 2 : tunables 24 12 8 : slabdata 19 19 0
nfs_direct_cache 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
nfs_write_data 65 90 832 9 2 : tunables 54 27 8 : slabdata 8 10 0
nfs_read_data 109 120 768 5 1 : tunables 54 27 8 : slabdata 24 24 11
nfs_inode_cache 1305805 1312746 1040 3 1 : tunables 24 12 8 : slabdata 437582 437582 0
nfs_page 251 300 128 30 1 : tunables 120 60 8 : slabdata 10 10 60
fscache_cookie_jar 0 0 72 53 1 : tunables 120 60 8 : slabdata 0 0 0
rpc_buffers 8 8 2048 2 1 : tunables 24 12 8 : slabdata 4 4 0
rpc_tasks 208 230 384 10 1 : tunables 54 27 8 : slabdata 23 23 0
rpc_inode_cache 12 15 768 5 1 : tunables 54 27 8 : slabdata 3 3 0
ip_fib_alias 31 59 64 59 1 : tunables 120 60 8 : slabdata 1 1 0
ip_fib_hash 31 59 64 59 1 : tunables 120 60 8 : slabdata 1 1 0
jbd_1k 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
dm_mpath 0 0 40 92 1 : tunables 120 60 8 : slabdata 0 0 0
jbd_4k 21 21 4096 1 1 : tunables 24 12 8 : slabdata 21 21 0
dm-tracked-chunk 0 0 24 144 1 : tunables 120 60 8 : slabdata 0 0 0
dm-snapshot-in 0 0 112 34 1 : tunables 120 60 8 : slabdata 0 0 0
dm-snapshot-ex 0 0 32 112 1 : tunables 120 60 8 : slabdata 0 0 0
dm_uevent 0 0 2608 3 2 : tunables 24 12 8 : slabdata 0 0 0
dm_tio 1410 1584 24 144 1 : tunables 120 60 8 : slabdata 11 11 0
dm_io 1411 1564 40 92 1 : tunables 120 60 8 : slabdata 17 17 30
scsi_cmd_cache 88 100 384 10 1 : tunables 54 27 8 : slabdata 10 10 0
sgpool-128 32 32 4096 1 1 : tunables 24 12 8 : slabdata 32 32 0
sgpool-64 32 32 2048 2 1 : tunables 24 12 8 : slabdata 16 16 0
sgpool-32 32 32 1024 4 1 : tunables 54 27 8 : slabdata 8 8 0
sgpool-16 32 32 512 8 1 : tunables 54 27 8 : slabdata 4 4 0
sgpool-8 138 195 256 15 1 : tunables 120 60 8 : slabdata 13 13 30
scsi_io_context 0 0 112 34 1 : tunables 120 60 8 : slabdata 0 0 0
ext3_inode_cache 92392 92415 760 5 1 : tunables 54 27 8 : slabdata 18483 18483 0
ext3_xattr 76 132 88 44 1 : tunables 120 60 8 : slabdata 3 3 0
journal_handle 144 144 24 144 1 : tunables 120 60 8 : slabdata 1 1 0
journal_head 248 320 96 40 1 : tunables 120 60 8 : slabdata 8 8 44
revoke_table 12 202 16 202 1 : tunables 120 60 8 : slabdata 1 1 0
revoke_record 0 0 32 112 1 : tunables 120 60 8 : slabdata 0 0 0
uhci_urb_priv 2 67 56 67 1 : tunables 120 60 8 : slabdata 1 1 0
UNIX 60 132 704 11 2 : tunables 54 27 8 : slabdata 12 12 0
flow_cache 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
msi_cache 13 59 64 59 1 : tunables 120 60 8 : slabdata 1 1 0
cfq_ioc_pool 45 168 160 24 1 : tunables 120 60 8 : slabdata 7 7 0
cfq_pool 43 144 160 24 1 : tunables 120 60 8 : slabdata 6 6 0
crq_pool 165 288 80 48 1 : tunables 120 60 8 : slabdata 6 6 0
deadline_drq 0 0 80 48 1 : tunables 120 60 8 : slabdata 0 0 0
as_arq 0 0 96 40 1 : tunables 120 60 8 : slabdata 0 0 0
mqueue_inode_cache 1 4 896 4 1 : tunables 54 27 8 : slabdata 1 1 0
isofs_inode_cache 0 0 608 6 1 : tunables 54 27 8 : slabdata 0 0 0
hugetlbfs_inode_cache 1 7 576 7 1 : tunables 54 27 8 : slabdata 1 1 0
ext2_inode_cache 0 0 720 5 1 : tunables 54 27 8 : slabdata 0 0 0
ext2_xattr 0 0 88 44 1 : tunables 120 60 8 : slabdata 0 0 0
dnotify_cache 5 92 40 92 1 : tunables 120 60 8 : slabdata 1 1 0
dquot 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
eventpoll_pwq 317 795 72 53 1 : tunables 120 60 8 : slabdata 15 15 0
eventpoll_epi 317 500 192 20 1 : tunables 120 60 8 : slabdata 25 25 0
inotify_event_cache 0 0 40 92 1 : tunables 120 60 8 : slabdata 0 0 0
inotify_watch_cache 1 53 72 53 1 : tunables 120 60 8 : slabdata 1 1 0
kioctx 0 0 320 12 1 : tunables 54 27 8 : slabdata 0 0 0
kiocb 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
fasync_cache 0 0 24 144 1 : tunables 120 60 8 : slabdata 0 0 0
shmem_inode_cache 331 405 768 5 1 : tunables 54 27 8 : slabdata 81 81 0
posix_timers_cache 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
uid_cache 14 30 128 30 1 : tunables 120 60 8 : slabdata 1 1 0
ip_mrt_cache 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
tcp_bind_bucket 405 560 32 112 1 : tunables 120 60 8 : slabdata 5 5 0
inet_peer_cache 1 30 128 30 1 : tunables 120 60 8 : slabdata 1 1 0
secpath_cache 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
xfrm_dst_cache 0 0 384 10 1 : tunables 54 27 8 : slabdata 0 0 0
ip_dst_cache 553 1020 384 10 1 : tunables 54 27 8 : slabdata 102 102 0
arp_cache 17 30 256 15 1 : tunables 120 60 8 : slabdata 2 2 0
RAW 9 10 768 5 1 : tunables 54 27 8 : slabdata 2 2 0
UDP 24 25 768 5 1 : tunables 54 27 8 : slabdata 5 5 0
tw_sock_TCP 14 80 192 20 1 : tunables 120 60 8 : slabdata 4 4 0
request_sock_TCP 19 30 128 30 1 : tunables 120 60 8 : slabdata 1 1 0
TCP 528 570 1536 5 2 : tunables 24 12 8 : slabdata 114 114 0
blkdev_ioc 45 268 56 67 1 : tunables 120 60 8 : slabdata 4 4 0
blkdev_queue 27 30 1576 5 2 : tunables 24 12 8 : slabdata 6 6 0
blkdev_requests 67 140 272 14 1 : tunables 54 27 8 : slabdata 10 10 13
biovec-256 27 27 4096 1 1 : tunables 24 12 8 : slabdata 27 27 0
biovec-128 47 48 2048 2 1 : tunables 24 12 8 : slabdata 24 24 0
biovec-64 87 88 1024 4 1 : tunables 54 27 8 : slabdata 22 22 0
biovec-16 87 90 256 15 1 : tunables 120 60 8 : slabdata 6 6 0
biovec-4 87 118 64 59 1 : tunables 120 60 8 : slabdata 2 2 0
biovec-1 275 606 16 202 1 : tunables 120 60 8 : slabdata 3 3 0
bio 525 660 128 30 1 : tunables 120 60 8 : slabdata 22 22 0
utrace_engine_cache 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
utrace_cache 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
sock_inode_cache 739 786 640 6 1 : tunables 54 27 8 : slabdata 131 131 0
skbuff_fclone_cache 191 294 512 7 1 : tunables 54 27 8 : slabdata 42 42 0
skbuff_head_cache 941 1305 256 15 1 : tunables 120 60 8 : slabdata 87 87 30
file_lock_cache 3 66 176 22 1 : tunables 120 60 8 : slabdata 3 3 0
Acpi-Operand 1082 1239 64 59 1 : tunables 120 60 8 : slabdata 21 21 0
Acpi-ParseExt 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
Acpi-Parse 0 0 40 92 1 : tunables 120 60 8 : slabdata 0 0 0
Acpi-State 0 0 80 48 1 : tunables 120 60 8 : slabdata 0 0 0
Acpi-Namespace 348 448 32 112 1 : tunables 120 60 8 : slabdata 4 4 0
delayacct_cache 237 708 64 59 1 : tunables 120 60 8 : slabdata 12 12 0
taskstats_cache 4 106 72 53 1 : tunables 120 60 8 : slabdata 2 2 0
proc_inode_cache 1031 1080 592 6 1 : tunables 54 27 8 : slabdata 180 180 0
sigqueue 29 48 160 24 1 : tunables 120 60 8 : slabdata 2 2 0
radix_tree_node 99339 113610 536 7 1 : tunables 54 27 8 : slabdata 16230 16230 0
bdev_cache 28 35 768 5 1 : tunables 54 27 8 : slabdata 7 7 0
sysfs_dir_cache 5449 5500 88 44 1 : tunables 120 60 8 : slabdata 125 125 0
mnt_cache 35 75 256 15 1 : tunables 120 60 8 : slabdata 5 5 0
inode_cache 1137 1253 560 7 1 : tunables 54 27 8 : slabdata 179 179 0
dentry_cache 1423949 1498518 216 18 1 : tunables 120 60 8 : slabdata 83251 83251 0
filp 2390 3075 256 15 1 : tunables 120 60 8 : slabdata 205 205 0
names_cache 49 49 4096 1 1 : tunables 24 12 8 : slabdata 49 49 0
avc_node 14 53 72 53 1 : tunables 120 60 8 : slabdata 1 1 0
selinux_inode_security 1047 1344 80 48 1 : tunables 120 60 8 : slabdata 28 28 0
key_jar 38 60 192 20 1 : tunables 120 60 8 : slabdata 3 3 0
idr_layer_cache 104 105 528 7 1 : tunables 54 27 8 : slabdata 15 15 0
buffer_head 201641 201960 96 40 1 : tunables 120 60 8 : slabdata 5049 5049 0
mm_struct 120 120 896 4 1 : tunables 54 27 8 : slabdata 30 30 0
vm_area_struct 18177 18326 176 22 1 : tunables 120 60 8 : slabdata 832 833 0
fs_cache 115 590 64 59 1 : tunables 120 60 8 : slabdata 10 10 0
files_cache 116 135 768 5 1 : tunables 54 27 8 : slabdata 27 27 0
signal_cache 220 260 768 5 1 : tunables 54 27 8 : slabdata 52 52 0
sighand_cache 216 228 2112 3 2 : tunables 24 12 8 : slabdata 76 76 0
task_struct 232 232 1888 2 1 : tunables 24 12 8 : slabdata 116 116 0
anon_vma 1495 2304 24 144 1 : tunables 120 60 8 : slabdata 16 16 0
pid 230 649 64 59 1 : tunables 120 60 8 : slabdata 11 11 0
shared_policy_node 0 0 48 77 1 : tunables 120 60 8 : slabdata 0 0 0
numa_policy 110 432 24 144 1 : tunables 120 60 8 : slabdata 3 3 0
size-131072(DMA) 0 0 131072 1 32 : tunables 8 4 0 : slabdata 0 0 0
size-131072 0 0 131072 1 32 : tunables 8 4 0 : slabdata 0 0 0
size-65536(DMA) 0 0 65536 1 16 : tunables 8 4 0 : slabdata 0 0 0
size-65536 2 2 65536 1 16 : tunables 8 4 0 : slabdata 2 2 0
size-32768(DMA) 0 0 32768 1 8 : tunables 8 4 0 : slabdata 0 0 0
size-32768 6 6 32768 1 8 : tunables 8 4 0 : slabdata 6 6 0
size-16384(DMA) 0 0 16384 1 4 : tunables 8 4 0 : slabdata 0 0 0
size-16384 15 15 16384 1 4 : tunables 8 4 0 : slabdata 15 15 0
size-8192(DMA) 0 0 8192 1 2 : tunables 8 4 0 : slabdata 0 0 0
size-8192 19 19 8192 1 2 : tunables 8 4 0 : slabdata 19 19 0
size-4096(DMA) 0 0 4096 1 1 : tunables 24 12 8 : slabdata 0 0 0
size-4096 286 292 4096 1 1 : tunables 24 12 8 : slabdata 286 292 0
size-2048(DMA) 0 0 2048 2 1 : tunables 24 12 8 : slabdata 0 0 0
size-2048 1206 1250 2048 2 1 : tunables 24 12 8 : slabdata 625 625 0
size-1024(DMA) 0 0 1024 4 1 : tunables 54 27 8 : slabdata 0 0 0
size-1024 1100 1144 1024 4 1 : tunables 54 27 8 : slabdata 286 286 27
size-512(DMA) 0 0 512 8 1 : tunables 54 27 8 : slabdata 0 0 0
size-512 749 816 512 8 1 : tunables 54 27 8 : slabdata 102 102 0
size-256(DMA) 0 0 256 15 1 : tunables 120 60 8 : slabdata 0 0 0
size-256 1937 2040 256 15 1 : tunables 120 60 8 : slabdata 136 136 0
size-128(DMA) 0 0 128 30 1 : tunables 120 60 8 : slabdata 0 0 0
size-64(DMA) 0 0 64 59 1 : tunables 120 60 8 : slabdata 0 0 0
size-64 464978 634663 64 59 1 : tunables 120 60 8 : slabdata 10757 10757 0
size-32(DMA) 0 0 32 112 1 : tunables 120 60 8 : slabdata 0 0 0
size-128 1735 1920 128 30 1 : tunables 120 60 8 : slabdata 64 64 0
size-32 2799 3136 32 112 1 : tunables 120 60 8 : slabdata 28 28 0
kmem_cache 154 154 2688 1 1 : tunables 24 12 8 : slabdata 154 154 0
 
Old 07-12-2010, 02:18 AM   #12
syg00 (LQ Veteran)
2.6.16 introduced drop_caches, which can be used to free unused dentries and inodes. Not sure how that'll work with NFS.
I have my doubts you'll get much benefit at 2.6.18 - if the slab cache stays fragmented you may not see any. A lot of work has since gone into the slab (SLUB) allocator, but not at that kernel level, I fear.
 
Old 07-12-2010, 05:22 AM   #13
johnsfine (LQ Guru)
Quote:
Originally Posted by kazabubu
Here is /proc/slabinfo:
That shows that most of the memory whose use we're trying to understand is nfs_inode_cache.

That is completely outside my expertise. I don't know how that usage can get so high. Even with no memory pressure, how are there so many of whatever it is that is being cached? And if memory pressure occurs later, will this cache release the memory?
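
For scale, multiplying num_objs by objsize for the two biggest caches in your listing (shell arithmetic, just as a sanity check):

Code:
echo $(( 1312746 * 1040 / 1048576 )) MB    # nfs_inode_cache: roughly 1302 MB
echo $(( 1498518 * 216 / 1048576 )) MB     # dentry_cache: roughly 308 MB

Those two alone cover about 1.6GB of the 2.3GB Slab figure in your /proc/meminfo.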

Quote:
Originally Posted by syg00
2.6.16 introduced drop_caches, which can be used to free unused dentries and inodes. Not sure how that'll work with NFS.
I have my doubts you'll get much benefit at 2.6.18 - if the slab cache stays fragmented you may not see any.
I assume that is a comment on the large usage by nfs_inode_cache. I don't understand much from that comment. Hopefully, kazabubu will understand whatever it is you are telling him. Is drop_caches a command you're suggesting he try?
 
Old 07-12-2010, 05:25 AM   #14
johnsfine (LQ Guru)
BTW kazabubu, in the future please use CODE tags rather than QUOTE tags around output like that. It will make the info more readable for those of us trying to help you. The slabinfo data is badly formatted and hard to read even with CODE tags, so I decided against suggesting CODE tags earlier. But now I see it is even harder to read without them.
 
Old 07-12-2010, 06:13 AM   #15
syg00 (LQ Veteran)
Slab caches are outside the normally reported memory usages. Extreme examples like this led to a redesign.
drop_caches is a sysctl - see "man proc"
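
A minimal sketch of its use (as root; sync first so dirty data is written out):

Code:
sync
echo 2 > /proc/sys/vm/drop_caches    # 2 = free reclaimable dentries and inodes (1 = pagecache, 3 = both)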

I wouldn't normally advise using it in a production environment - especially on such an old kernel. Later kernels incorporate significant changes that give better consolidation of slabs and freeing of unused slabs. It's unlikely problems will occur, but I can see situations where memory consumption might not be helped - which could look like making things worse. Not a good position to put yourself in, in a production environment.
It's possible (probable) Red Hat backported some of that into their RHEL kernels, but I don't know that for sure.
 
  

