Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
Dear all,
I have a box running CentOS 4.x, and I'm currently using it to host about 5 sites, roughly 15 domains in total. Some of them are very dynamic (a lot of user interaction), some are static (only content updates), and one is static but runs heavy database processes.
I chose Apache 2 as my web server, with PHP for server-side scripting.
Lately we've added another domain for file storage, where we keep hundreds of files for users to download. It eats a lot of our bandwidth and, worse, a lot of CPU too, because each Apache instance keeps running for the duration of the download.
My question is: can I assign a separate (web or download) server service to handle these downloads so they don't burden Apache? And how can I limit users' download sessions? For instance, I've had users running Download Accelerator, which opens many connections and holds them for a long time.
These are ideas, not tested, so I don't know how much impact they would have in your context. Mind you, I'm not sure the information you've given is really clear enough to know exactly what to suggest.
Use Squid as a reverse HTTP proxy: in some contexts you can cache the outgoing traffic, and depending on how variable the responses are and how the URLs are set up, this can be helpful.
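For what that might look like: a minimal reverse-proxy (accelerator) sketch for Squid 2.6 or later, untested, with the hostname and the Apache back-end port as placeholders — the idea is that Squid takes port 80 and Apache moves to 8080 on the same box:

```
# /etc/squid/squid.conf -- minimal reverse-proxy (accelerator) sketch
# Squid listens on port 80 and forwards cache misses to Apache,
# which has been moved to port 8080 on the same machine.
http_port 80 accel defaultsite=www.example.com
cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=apache

# Only accelerate our own sites; refuse to act as an open proxy.
acl our_sites dstdomain www.example.com
http_access allow our_sites
http_access deny all
cache_peer_access apache allow our_sites
```

Cacheable static content (like the download files) would then be served from Squid's cache instead of tying up an Apache child for each request.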
Apache isn't exactly the lightest-weight web server, and changing to something lighter could help.
Quote:
Lately we've added another domain for file storage
...well, you could do that on separate hardware, but that may be what you are trying to avoid...
And anything involving a database is likely to chew up resources; is a database really necessary for everything you're trying to do? Writing database code so that resources are used efficiently is a significant problem... so mostly people ignore it. This may, or may not, be what's setting you back.
Separate hardware would be wonderful, but as you said, we're trying to avoid it. It's a cost issue...
The domain with the heavy database processes is a single, unique domain.
It stores gigabytes of data and does search and data mining... Perhaps we should move it somewhere else too...
About limiting clients' download sessions (and perhaps bandwidth), do you have any suggestions?
Are the users downloading from or uploading to the web server?
There is a module for Apache called mod_limitipconn (http://dominia.org/djao/limitipconn2.html).
It's used to limit the number of connections allowed per IP.
I tested it some time ago, for educational purposes only, so I don't know how it will perform under massive download traffic.
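To give an idea of the shape of it, a hedged sketch of the configuration, assuming the module is installed and loaded; the directive names come from the module's documentation, and the /files path and the limit of 2 are placeholders:

```
# httpd.conf -- assumes mod_limitipconn is built and loaded
ExtendedStatus On   # the module relies on mod_status extended status

<IfModule mod_limitipconn.c>
    <Location /files>
        # Allow at most 2 simultaneous connections per client IP
        MaxConnPerIP 2
        # Don't count small inline images against the limit
        NoIPLimit image/*
    </Location>
</IfModule>
```

Scoping it to a Location block means only the download area is limited; the rest of the site is unaffected.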
So we can use Squid to cache pages requested by users, eh? I'll take a look at that.
Yes, this is quite a conventional thing to do, but whether it helps, or not, depends entirely on your access pattern.
If people request the same page over and over again, it can be very effective. OTOH, if all the requests are for unique pages, or the page URLs are all unique, it will be less helpful, maybe even counterproductive.
Quote:
Yes, Apache isn't the lightest... I've been considering Nginx or Lighttpd...
I'm in exactly the same position, so can't offer any advice.
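For what it's worth, serving a static download tree straight from Nginx is fairly simple; a minimal, untested sketch, where the downloads.example.com vhost and the document root are hypothetical names:

```
# nginx.conf (server block) -- hypothetical names and paths
server {
    listen      80;
    server_name downloads.example.com;
    root        /var/www/downloads;

    # Throttle each connection to ~100 KB/s after the first 1 MB,
    # so browsing stays snappy but long downloads are rate-limited.
    limit_rate_after 1m;
    limit_rate       100k;
}
```

The per-connection limit_rate is also one answer to the bandwidth question above: an Nginx front end for downloads both offloads Apache and throttles greedy clients.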
Quote:
Separate hardware will be wonderful, but as you said we are trying to avoid it. Cost issue...
...although it needn't be that much of a cost, so if it doesn't work out from a cost point of view, maybe you are doing something wrong; I can't say...
Quote:
The domain with the heavy database processes is a single, unique domain.
It stores gigabytes of data and does search and data mining... Perhaps we should move it somewhere else too...
Whether it's on a separate domain or not doesn't really seem relevant. Whether it's on the same disk, or on the same CPU, seems more germane.
Quote:
About limiting clients' download sessions (and perhaps bandwidth), do you have any suggestions?
I should have some, but I've forgotten the bandwidth-limiting ones. You could probably limit the number of connections from a particular IP with a bit of iptables trickery, but I have no idea whether that really helps in any way; I still haven't understood the problem statement well enough.
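The iptables trickery might look something like this, using the connlimit match — an untested sketch; the limit of 4 is arbitrary, and connlimit support has to be present in your kernel and iptables build:

```
# Reject new HTTP connections from any single IP that already has
# 4 connections open to port 80 on this host.
iptables -A INPUT -p tcp --syn --dport 80 \
    -m connlimit --connlimit-above 4 -j REJECT --reject-with tcp-reset
```

Unlike an Apache module, this counts raw TCP connections, so it also blunts download accelerators before they ever reach the web server.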
@Arty: I'll take a look at mod_limitipconn, and keep an open mind about other Apache modules too.
@salasi:
Quote:
...although it needn't be that much of a cost, so if you can't consider this from a cost point of view, maybe you are doing something wrong, I can't say...
Quote:
Whether it's on a separate domain or not doesn't really seem relevant. Whether it's on the same disk, or on the same CPU, seems more germane.
That's the thing: we'd have to think about investing in new machine(s) and then keeping them hosted (renting a spot) at a data center.
And that's the cost issue I'm talking about. It's not my decision to make, but I'll discuss it with my team.