Linux - Newbie: This LinuxQuestions.org forum is for members that are new to Linux.
Just starting out and have a question?
If it is not in the man pages or the how-tos, this is the place!
I couldn't connect to my server last night while I was at work, so once I got home I checked whether the server was still on, and it was. (We had a power outage last week that caused me some issues, so I thought that could have been it.) I ran ps aux | more and saw multiple entries of apache2:
I have no idea why it's doing this, but I feel pretty sure it has something to do with my troubles last night. When I checked before rebooting, there were around 30 instances of it running. After the reboot it was only about 5, but the number seems to increase the longer the machine is on. Any idea what's going on?
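As a side note, eyeballing ps aux | more undercounts easily; a quicker way to tally the apache2 processes is to filter and count them (the [a]pache2 bracket trick keeps grep from matching its own command line):

```shell
# Count running apache2 processes; the [a]pache2 pattern stops grep
# from matching its own command line in the ps output.
ps aux | grep '[a]pache2' | wc -l

# Breakdown by user, to see the root parent vs. the www-data children:
ps -C apache2 -o user= | sort | uniq -c
```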
When Apache starts (as you can see, it is started as root), it drops privileges to the named owner (www-data) and starts a given number of listeners, defined in httpd.conf by the StartServers setting: https://httpd.apache.org/docs/2.2/mo...l#startservers.
See also related directives such as MaxClients on the same page.
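For context, a prefork-MPM tuning section in httpd.conf looks like this; the values below mirror the typical 2.2 defaults and are illustrative only (check your distro's shipped config, since e.g. Debian uses its own numbers):

```apacheconf
<IfModule mpm_prefork_module>
    StartServers          5     # children launched at startup
    MinSpareServers       5     # keep at least this many idle children
    MaxSpareServers      10     # kill idle children beyond this
    MaxClients          256     # hard cap on simultaneous children
    MaxRequestsPerChild   0     # 0 = children are never recycled
</IfModule>
```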
OK, I think I understand what I read there, but this morning I came home, checked again, and saw the following:
Obviously that's a ridiculous number of processes and isn't normal. What I'm trying to figure out is what may be causing the issue. The only thing I can think of is a Glype proxy script running in a directory on the web server that a few of my coworkers (only 3 of them have the URL) use occasionally to bypass the content filter at work. Does anyone know if there is some sort of bug where the child processes aren't terminated after the client closes out of the script? Or could it be something else?
Like I said, check the (several) related directives, e.g. MaxClients; there's also one (MaxRequestsPerChild) that dictates how many requests an instance serves before it shuts down.
I'd also check the access_log and error_log to see where the requests are coming from and why.
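Concretely, something like this against the access log shows who is hitting the server hardest. The log path below is the Debian default, and the field positions assume the common/combined log format; adjust both for your setup:

```shell
LOG=/var/log/apache2/access.log   # Debian layout; adjust for your distro

if [ -r "$LOG" ]; then
    # Top 10 client IPs by request count (field 1 of each log line)
    awk '{print $1}' "$LOG" | sort | uniq -c | sort -rn | head -10

    # Top 10 requested paths (field 7), e.g. to spot the Glype script
    awk '{print $7}' "$LOG" | sort | uniq -c | sort -rn | head -10
fi
```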
My httpd.conf doesn't have anything beyond ServerName localhost, so the default MaxClients of 256 should apply. What I don't understand is: 1) why child processes are not terminated when someone uses the Glype proxy, and 2) how reducing MaxClients (or any of the other settings) would change anything site-related for me. As far as I can tell, at a certain point it will just stop allowing people to access the site at all, because requests will queue until the child processes are terminated, which won't happen until I figure out why Glype acts this way, manually go in and kill those processes myself, or restart the server. Only 3 people use the proxy on the website, and according to the Glype logs they are solely using it to view content on YouTube at this point, so I fail to see why I would have that many leftover child processes unless it were some issue with Glype. I guess I'll just remove Glype until I can figure it out. Thanks for your help.
But why are you worried? Do you experience performance problems with this server? If everything is running normally, why bother killing processes and restarting the server? I say, let apache do its job; if it needs 256 processes to do it, so be it.
Well, the main reason is that I suspect it to be part of why I was unable to connect to my server the other day. The modem was alive, and when I got home everything appeared to be fine, but I still couldn't connect. I checked the processes and saw what I've already posted. After restarting, everything worked fine. I went to work and everything was fine until someone came to me saying it was no longer loading content through Glype. Checked processes and boom, 70 apache processes again. A restart fixed it again. The only thing I've seen that correlates with the issue is the abundance of child processes that haven't been killed.
I'm no expert by any stretch, but I equate it to how you need to delete objects in C++, or they occupy memory indefinitely unless the program exits.
From what I found on the net, Apache 2 will spawn and keep extra processes ready to service new clients. How many resources it uses for this ought to be configurable, though I haven't gone through the details. You may also look into the worker MPM module, http://httpd.apache.org/docs/2.2/mod/worker.html, to control thread spawning.
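For reference, a worker-MPM section has the same shape as the prefork one but tunes threads rather than whole processes. The numbers below are roughly the 2.2 documented defaults, not a recommendation:

```apacheconf
<IfModule mpm_worker_module>
    StartServers          3     # child processes launched at startup
    MaxClients          400     # total threads = processes x ThreadsPerChild
    MinSpareThreads      75
    MaxSpareThreads     250
    ThreadsPerChild      25
    MaxRequestsPerChild   0     # 0 = children are never recycled
</IfModule>
```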
Thanks, I will be looking into this tonight. On another note, I'm apparently not the only one with this issue ( http://www.netbuilders.org/web-proxi...lype-5860.html ), so I'm going to follow that thread, check out the worker module link, and see where it all leads.
idofxeno, if you know for sure you won't be serving a humongous userbase, seriously consider shifting to a lightweight webserver that is excellent for smaller sites; nginx and lighttpd come to mind. They are very underrated. The first thing to find out would be whether there are specific dependencies on Apache and how the alternatives could satisfy them.
It is something to worry about if your system slows to a crawl and no longer functions even as a webserver.
Then impose a proper MaxClients limit.
Count them as 5 per processor (you could also divide your memory size by the data space used by an apache process, then subtract 10; if the result is below 10, use 10).
The only reason to allow apache to run multiple workers is to support concurrent connections. If that overloads the system, the only reasonable thing to do is to limit the number of threads to what the hardware will support.
256 might be reasonable for my system (8 GB with 8 cores), but personally, 10 is enough.
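The memory-based heuristic above can be sketched as a small shell function. Note that suggest_maxclients is just a hypothetical helper name, and the 30 MB per-child figure is an illustrative assumption, not a measurement:

```shell
# Hypothetical sizing helper for the heuristic above: divide total RAM
# by the footprint of one apache child, subtract 10 as headroom, and
# never go below 10.
suggest_maxclients() {
    total_mb=$1      # total RAM in MB
    per_child_mb=$2  # average resident size of one apache2 child in MB
    n=$(( total_mb / per_child_mb - 10 ))
    [ "$n" -lt 10 ] && n=10
    echo "$n"
}

# e.g. 8 GB of RAM and an assumed ~30 MB per child:
suggest_maxclients 8192 30   # prints 263
```

On a live system you can get the per-child figure by averaging the output of ps -C apache2 -o rss= (values in KB).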
It is something to worry about if your system slows to a crawl and no longer functions even as a webserver.
Running multiple instances of a process is not, by itself, a reason for slowing to a crawl. The reason would be those processes spiking CPU and memory utilization, and the usual situations where that happens are high traffic, bad code, or a combination of both. I have seen Apache spawn multiple instances even with little traffic, and this behaviour has not caused the server to slow down. My server had a Xeon quad-core CPU and 8GB RAM. Basically, you are looking in the wrong place for the reason for the system's slow response.