I investigated ulimit. These are our current ulimits:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 135168
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 135168
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Our box has 16 GB of RAM. Right now ps (ps -ale --sort=-vsize) reports the following (I added the commas for readability - I wish ps did that itself):
S UID PID PPID C PRI NI RSS SZ WCHAN TTY TIME CMD
S 48 3490 3484 1 75 0 7,861,380 2,071,971 stext ? 00:35:10 httpd
S 48 3569 3484 0 76 0 1,217,284 402,132 stext ? 00:05:51 httpd
S 0 3200 1 1 79 0 140,716 336,470 stext ? 00:26:26 java
S 48 3488 3484 0 76 0 364,932 199,448 stext ? 00:01:23 httpd
S 48 3571 3484 0 75 0 312,572 175,107 stext ? 00:01:46 httpd
RSS = resident set size, the non-swapped physical memory that a task has used (in kilobytes).
SZ = approximate amount of swap space that would be required if the process were to dirty all writable pages and then be swapped out. This number is very rough!
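As a cross-check on those ps columns, the same per-process numbers are visible in /proc/<pid>/status (the VmRSS and VmSize lines, both reported in kB). A minimal sketch, purely for illustration, that echoes those two lines for a given PID:

/* Illustration only: print the VmRSS and VmSize lines from
 * /proc/<pid>/status (both in kB) so the ps figures can be cross-checked. */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    char path[64], line[256];
    FILE *f;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    snprintf(path, sizeof(path), "/proc/%s/status", argv[1]);
    if ((f = fopen(path, "r")) == NULL) {
        perror(path);
        return 1;
    }
    while (fgets(line, sizeof(line), f))
        if (strncmp(line, "VmRSS:", 6) == 0 || strncmp(line, "VmSize:", 7) == 0)
            fputs(line, stdout);
    fclose(f);
    return 0;
}

Run against PID 3490 (the big httpd above), it should report roughly the same ~7.8 GB resident figure that ps shows.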
The biggest httpd (roughly 7.8 GB resident, about half the machine's RAM) seems far too big. Perhaps it allocated a bunch of memory and never freed it.
I think that the important ulimit options for us are the following (the matching RLIMIT_* names are shown in the sketch after this list):
-d The maximum size of a process's data segment
-l The maximum size that may be locked into memory
-v The maximum amount of virtual memory available to the shell
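For reference, those three flags map onto RLIMIT_DATA, RLIMIT_MEMLOCK, and RLIMIT_AS in the kernel. A minimal sketch that just prints the current soft and hard values (note that getrlimit() reports bytes, while the shell's ulimit output above is in kbytes):

#include <stdio.h>
#include <sys/resource.h>

static void show(const char *name, int resource)
{
    struct rlimit rl;

    if (getrlimit(resource, &rl) != 0) {
        perror(name);
        return;
    }
    /* RLIM_INFINITY is what "unlimited" looks like from C. */
    if (rl.rlim_cur == RLIM_INFINITY)
        printf("%-15s soft=unlimited", name);
    else
        printf("%-15s soft=%llu", name, (unsigned long long) rl.rlim_cur);
    if (rl.rlim_max == RLIM_INFINITY)
        printf("  hard=unlimited\n");
    else
        printf("  hard=%llu\n", (unsigned long long) rl.rlim_max);
}

int main(void)
{
    show("RLIMIT_DATA", RLIMIT_DATA);       /* ulimit -d */
    show("RLIMIT_MEMLOCK", RLIMIT_MEMLOCK); /* ulimit -l */
    show("RLIMIT_AS", RLIMIT_AS);           /* ulimit -v */
    return 0;
}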
I'm thinking of trying a 2 GB limit on data segments with "ulimit -d 2000000" (the -d value is in kilobytes, per the listing above, so 2000000 is about 2 GB).
e.g., http://httpd.apache.org/docs/2.0/vhosts/fd-limits.html takes the same approach for file-descriptor limits:
ulimit -S -n 100
Following that pattern, we could start httpd from a small wrapper:
#!/bin/sh
ulimit -d 2000000
exec /usr/sbin/apachectl restart
Every httpd process then inherits the limit and won't be able to grow its data segment past 2 GB.
For the system call behind ulimit, see setrlimit(2) at http://linux.die.net/man/2/setrlimit; the underlying calls that actually grow the data segment are described in the brk(2)/sbrk(2) man page ("brk, sbrk - change data segment size").
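A minimal sketch of what that looks like at the C level (illustration only - the wrapper script above is all we actually need): lower RLIMIT_DATA on the current process, which is what ulimit -d does, then show that a later attempt to grow the data segment with sbrk() is refused. The 16 MB figure is just an arbitrary value for the demo.

/* Illustration only: shrink our own RLIMIT_DATA (the limit "ulimit -d"
 * sets, except setrlimit() takes bytes rather than kbytes), then try to
 * grow the data segment past it with sbrk(); the kernel refuses. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/resource.h>

int main(void)
{
    /* Arbitrary demo value: 16 MB soft and hard data-segment limit. */
    struct rlimit rl = { 16 * 1024 * 1024, 16 * 1024 * 1024 };

    if (setrlimit(RLIMIT_DATA, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }

    /* Ask for 64 MB more heap - well past the 16 MB limit. */
    if (sbrk(64 * 1024 * 1024) == (void *) -1)
        printf("sbrk refused as expected: %s\n", strerror(errno));
    else
        printf("sbrk unexpectedly succeeded\n");
    return 0;
}

One caveat worth checking before we rely on this: on older kernels RLIMIT_DATA is enforced only for brk()/sbrk()-based growth, so large allocations that glibc chooses to satisfy with mmap() may slip past it; if that turns out to matter, ulimit -v (RLIMIT_AS) caps total virtual memory regardless of how it is allocated.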