Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
I am new here and generally new to Linux. We have a Debian server running apache2. Our load average used to be below 1.0; however, this week it is anywhere between 5 and 10.
It is causing really big problems. It is our front-end, client-facing HTTP server. We have coded PHP to turn away requests if the load average is above 2.0. Does anyone have any advice? If I disable the apache2 service, the load average drops back down to an acceptable level; re-enabling it rockets the load average back up.
I'm not extremely well versed in Linux, so be gentle!
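In case it helps, I believe these are the figures in question - our PHP check presumably reads the same load averages the kernel exposes, which you can see from a shell with stock commands:
Code:
# 1, 5 and 15 minute load averages (first three fields), plus runnable/total tasks and last PID
cat /proc/loadavg
# number of CPUs, for comparison against those figures
nproc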
Quote:
We have coded PHP to turn away requests if the load average is above 2.0.
And who decided that? And on what basis? Looks like (at least) a 4-(hyper-)thread machine, possibly real cores.
Is the (so-called) "high" loadavg affecting service to your clients? If not, why are you turning them away?
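You can check which it is yourself; something along these lines (stock lscpu, so the exact labels may vary) shows threads vs. cores:
Code:
# logical CPUs, threads per core, cores per socket and socket count
lscpu | grep -E '^(CPU\(s\)|Thread|Core|Socket)'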
Thank you for your response, it is much appreciated.
The server is a quad Xeon E5410 server with 4GB RAM. The decision to turn away clients at a 2.0 load average was made before my time. Like I say, I'm new to Linux and am learning.
Whilst I understand what you are saying, it still doesn't explain why the server had a low load average last week but such a high one this week!
What should we set the max load average to before turning clients away?
How high a tree would you like to be lynched from?
Bad metric to use IMHO - people who do use it tend to have come from classic Unix (not Linux) backgrounds. Loadavg has a different meaning here. Run this (when the loadavg is high) and post the output:
Code:
# print top's header (first 7 lines), then any task whose state (column 8) is "D" - uninterruptible sleep
top -b -n 1 | awk '{if (NR <= 7) print; else if ($8 == "D") {print; count++}} END {print "Total status D: " count}'
One of the consequences of using sampled data. I'm sure innumerable PhDs have been earned analysing such conundrums.
To some extent it proves my point. Even with the Unix definition, you need (consistently) more than 4 runnable tasks to see any delay for CPU resources - which is usually what "loadavg" is taken to mean. With your kit you can always run 4 tasks concurrently; if you did, you'd have no (CPU) delay, but a loadavg of at least 4.
Why don't you run that command in a loop with a sleep and see what you get over time - something like the sketch below. In the meantime, if you think the number you are currently using may have been set based on a single-core machine, just bump it to 8. That may be closer to reality.
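A rough sketch (the interval and log path are arbitrary - adjust to taste):
Code:
# sample the D-state check every 30 seconds with a timestamp, so spikes can be lined up against the loadavg
while true; do
    date
    top -b -n 1 | awk '{if (NR <= 7) print; else if ($8 == "D") {print; count++}} END {print "Total status D: " count}'
    sleep 30
done >> /tmp/loadcheck.log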
Quote:
It is causing really big problems. It is our front-end, client-facing HTTP server. We have coded PHP to turn away requests if the load average is above 2.0.
Hmm, it does raise the question of whether you would be having a problem at all if you weren't taking this measure, but
Quote:
If I disable the apache2 service, the load average drops back down to an acceptable level; re-enabling it rockets the load average back up.
Well, you are running quite a few instances of Apache, and each one takes a few percent of CPU, which adds up. So that's what you'd expect.
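If you want to see that for yourself, something along these lines (standard ps options) lists the workers and their CPU share:
Code:
# list the Apache workers with their CPU and memory usage
ps -C apache2 -o pid,pcpu,pmem,stat,start,cmd
# and just the head count
ps -C apache2 --no-headers | wc -l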
Quote:
Our load average used to be below 1.0; however, this week it is anywhere between 5 and 10.
Something has changed. It might be a software update that has some extra 'feature', it might be something to do with hardware, or it might be the server facing a higher level of load and thus starting more instances of Apache. It might even be an attack attempt (doesn't slowloris do something vaguely like this?), in which case maybe your policy of dropping requests above a certain load level is quite good (though possibly you could allow a higher threshold).
But if you can get some kind of clue as to what has changed, you'll be quite a bit further on.
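A couple of quick places to look for clues, assuming the default Debian and Apache log locations (adjust the paths if yours differ):
Code:
# recent package installs/upgrades
grep -E ' (install|upgrade) ' /var/log/dpkg.log | tail -20
# requests per minute in the access log, to see whether traffic itself has jumped (assumes the default combined format)
awk '{print $4}' /var/log/apache2/access.log | cut -d: -f1-3 | uniq -c | tail -20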
Try monitoring with collectl - it can be your friend! See my reply to "High Load Average, Low CPU, Low IO Wait" posted by "ordaolmayanadam", which is a couple of dozen posts after (above) this one; I responded there with too much text to retype and didn't want to start parallel threads. Lots of options to get the data you're looking for...
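As a starting point, something like this (the subsystem letters and interval are just my usual picks - check the man page for the full set):
Code:
# cpu, disk and network stats every 10 seconds, with a time column
collectl -scdn -oT -i 10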
-mark
Thanks for all your help. We managed to resolve this in the end by deleting both the access.log and ssl_access.log files. Neither was particularly big (255MB), but for some reason this dropped the load from 10ish to 0.1.
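A note for anyone who finds this later: deleting logs out from under a running Apache leaves it writing to the now-unlinked file handles, so truncating in place - or a graceful reload after the delete - is the safer move. The paths below assume the default Debian layout:
Code:
# empty the logs without removing the files Apache has open
truncate -s 0 /var/log/apache2/access.log /var/log/apache2/ssl_access.log
# or, if they have already been deleted, let Apache reopen its handles
apache2ctl graceful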