Quote:
Originally posted by cosmicperl
Any help on how I can reduce server load would be much appreciated. I'm puzzled as to why the CPU and memory usage isn't that high, but the load is huge and the server has slowed to a virtual halt. Still need help on my original questions.
You now know what escapes *many* *NIX users (including most of the ones who claim to be experts) -- load average has little to nothing to do with the percentage of CPU or memory being consumed ...
The load averages are a measure of how badly the system is backed up, not necessarily how much CPU is being burnt. Roughly speaking, the Linux kernel samples the process table every 5 seconds to see how many processes it has a) running on a CPU and b) runnable and waiting for one (both are state TASK_RUNNING in the kernel). These are added together, and at the 1-minute mark the running total is divided by the number of samples taken (12) to give you your 1-minute load average. A load average of 120 is *VERY* high: about 100 times what it should be on a single-CPU machine.
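As the note at the bottom says, the divide-by-12 arithmetic is a simplification. What the kernel really keeps is an exponentially damped moving average of the runnable-task count, recomputed every ~5 seconds. A rough Python sketch of that idea (ignoring the kernel's fixed-point arithmetic, so the numbers are illustrative, not bit-exact):

```python
import math

SAMPLE_INTERVAL = 5  # seconds between samples (the kernel's LOAD_FREQ)

def damped_average(samples, period):
    """Exponentially damped moving average over runnable-task counts.

    samples: task count (running + runnable) at each 5-second tick.
    period:  averaging window in seconds (60, 300, or 900 for the
             familiar 1/5/15-minute figures).
    """
    decay = math.exp(-SAMPLE_INTERVAL / period)
    load = 0.0
    for n in samples:
        # Old value decays, current sample is blended in.
        load = load * decay + n * (1 - decay)
    return load

# One task hogging a single CPU for 10 minutes settles toward load 1.0;
# 120 such tasks would settle toward 120.
one_minute = damped_average([1] * 120, period=60)
print(round(one_minute, 3))
```

This is why the 1-minute figure reacts quickly and the 15-minute figure lags: the larger `period` makes the decay factor closer to 1, so old samples linger longer.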
What's funny about situations like this is that it's usually the very client who called in that is causing the problem. All it takes to get what you have is a single CGI script stuck in a loop that won't get off the processor, or something spawning threads like mad, or the like (though the threads will usually eat RAM like there's no tomorrow).
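One way to hunt for the culprit is to look at which tasks are actually in the states that feed the load average. On Linux that's running/runnable tasks (state R) plus tasks in uninterruptible sleep (state D), which is why load can spike while CPU sits nearly idle: D-state tasks stuck on disk or NFS I/O count too. A quick Python sketch (Linux-only, since it scans /proc):

```python
import os

def runnable_tasks():
    """List (pid, state) for tasks in state R (running/runnable) or
    D (uninterruptible sleep) -- the states that feed the Linux load
    average -- by scanning /proc/<pid>/stat."""
    hogs = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/stat") as f:
                # comm (field 2) may contain spaces/parens, so split
                # after the closing paren; state is then the first field.
                state = f.read().rsplit(")", 1)[1].split()[0]
        except OSError:
            continue  # the process exited while we were scanning
        if state in ("R", "D"):
            hogs.append((int(pid), state))
    return hogs

print(runnable_tasks())
```

If this list is long and full of the same CGI binary, or full of D-state processes hammering one filesystem, you've found who to go after. (`ps -eo pid,stat,pcpu,comm` gives you much the same view from the shell.)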
My suggestion is to find who's breaking the system and break their hands.
As for limits.conf -- I'm fairly sure the entries in there are a "per user" kind of thing. In other words, each user can have 30 processes.
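For example, a per-user process cap of 30 would look something like this in /etc/security/limits.conf (assuming pam_limits is enabled, so the limit is applied at login; the `@web` group name is just an illustration):

```
# <domain>  <type>  <item>  <value>
*           hard    nproc   30      # every user capped at 30 processes
@web        hard    nproc   30      # or cap only members of the "web" group
```

Note the cap is per user, not system-wide: ten users could still hold 300 processes between them.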
HTH
NOTE: The math in the above explanation isn't entirely accurate, but it serves the conversation much better than the reality does ... see sched.h for more information.