Too many processes, Please Help!
Hi All,
I got woken up today by my phone ringing with clients going mad about my server being down. When I finally managed to get in via SSH I found there weren't any runaway processes as I suspected, but over 300 processes pushing the load average over 120. CPU usage was less than 10% and there was plenty of memory free. I need to limit the server to about 100 processes as it doesn't cope with many more.

I did a few searches and came across info on limits.conf, so I added these lines:

*     soft    nproc    20
*     hard    nproc    30
root  hard    nproc    50

When I tested whether this had worked by bombarding my server with requests (an easier method of testing would be much appreciated), it seemed to make no difference at all. Do I have to restart something for the changes to take effect? I've restarted apache and xinetd. Is there a way I can set a total process limit for the system regardless of user? If so, how do I do it, and how do I make the changes take effect afterwards?

Thanks very much in advance.

Just a few search strings I tried, to help people find this thread in the future: loading limits.conf changes, limiting the number of processes, combating DoS attacks.
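A minimal sketch of how to verify the limits, assuming a PAM-based Linux system: limits.conf is applied by pam_limits when a session opens, so already-running daemons and existing shells keep their old limits. Logging out and back in (rather than restarting apache) is what picks up the new values:

```shell
# After logging out and back in, check the per-user process cap
# that limits.conf ("nproc") actually gave this session:
ulimit -u

# For a rough system-wide ceiling (total tasks, not per user),
# the kernel exposes a tunable -- changing it takes effect at once:
cat /proc/sys/kernel/threads-max
# echo 3000 > /proc/sys/kernel/threads-max    # as root
```

Note that threads-max counts every task on the box, including kernel-spawned ones, so it is a blunter instrument than the per-user nproc limit.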
Update:-
I've now updated my apache httpd.conf to read:

StartServers        8
MinSpareServers     5
MaxSpareServers     8
MaxClients          40
MaxRequestsPerChild 1000

instead of:

StartServers        8
MinSpareServers     5
MaxSpareServers     30
MaxClients          150
MaxRequestsPerChild 1000

which should help. I run a lot of scripts, so half the time an http process also results in a perl process. Any help on how I can reduce server load would be much appreciated. I'm puzzled as to why the CPU and memory usage isn't that high, but the load is huge and the server has slowed to a virtual halt. Still need help on my original questions.
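A quick way to see whether the new MaxClients is holding, and to chase the "high load but low CPU" symptom, is a snapshot like the one below (the daemon name httpd is an assumption; substitute yours). On Linux, high load with idle CPU usually means processes stuck in uninterruptible sleep, which show up as state D in ps, typically waiting on disk or NFS I/O:

```shell
# One-shot snapshot: load, total processes, apache workers, and
# processes blocked in uninterruptible sleep (state D).
uptime
echo "total processes:   $(ps -e --no-headers | wc -l)"
echo "httpd processes:   $(ps -C httpd --no-headers | wc -l)"
echo "D-state processes: $(ps -eo stat= | grep -c '^D' || true)"
```

If the D-state count is large while CPU is idle, the bottleneck is I/O rather than anything nproc limits will fix.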
Quote:
The load averages are a measure of how badly the system is backed up, not necessarily how much CPU is being burnt. The Linux kernel looks at the process table every 5 seconds to see a) how many processes are running on the CPU (TASK_RUNNING) and b) how many are runnable and waiting for it. These are added together. Then, at the 1 minute mark, it divides the total by the number of checks (12) to get your 1 minute load average. A load average of 120 is *VERY* high, about 100 times what it should be on a single CPU machine. What's funny about situations like this is that it's usually the client who called that is causing the problem. All it takes to get what you have is a single CGI stuck in a loop that won't get off the processor, or something spawning threads like mad, or the like (though the threads will usually eat RAM like there's no tomorrow). My suggestion is to find who's breaking the system and break their hands. As for limits.conf, I'm fairly sure the entries in there are a "per user" kind of thing. In other words, each user can have 30 processes. HTH. NOTE: The math in the above explanation isn't entirely accurate, but serves conversation much better than the reality... See sched.h for more information.
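The simplified arithmetic in the quote can be checked directly: twelve 5-second samples of the runnable count, averaged over the minute. This is a toy model (the real kernel uses an exponentially damped moving average, and also counts tasks in uninterruptible sleep), but it shows how ~120 runnable tasks per sample yields a load average of 120:

```shell
# Toy model of the 1-minute load average: average of 12 samples
# taken at 5-second intervals (runnable-task counts are made up).
samples="118 122 119 121 120 118 123 120 119 121 122 117"
total=0
count=0
for s in $samples; do
    total=$((total + s))
    count=$((count + 1))
done
echo "1-minute load: $((total / count))"    # prints: 1-minute load: 120
```

The current real values are readable from /proc/loadavg, whose first three fields are the 1, 5, and 15 minute averages.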