Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
We run a website that is heavy on traffic, and the one dedicated server we own gets really, really slow at peak times.
So, we called the provider of our dedicated server and they answered that they do not offer "direct load balancing solutions" yet - they might in the future.
But as we cannot wait any longer, I wanted to ask if any of you server gurus out there have any clue on how to go about setting up a cluster of dedicated servers, and whether it is at all possible to just order (for example) 5 servers from the provider and set them up in a load-balancing fashion.
Sorry for any nonsense written above, but I am a newbie when it comes to server administration (we all are - a bunch of developers, actually), so any help/links would be greatly appreciated.
You can use a number of open source load balancers under Linux. First up, specifically for web traffic, check out mod_proxy_balancer for Apache: http://httpd.apache.org/docs/2.2/mod..._balancer.html This would normally be an independent box (or two for resilience) sitting in front of any number of Apache boxes in the back. There are more generic tools like LVS (http://www.linuxvirtualserver.org/), but the docs there are atrocious... This sort of thing is very simple for static content; if you have sessions and dynamic data and such, then that can certainly cloud the issue in terms of real-time data replication, but let's cross that bridge if you need to later.
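To give a feel for it, a minimal mod_proxy_balancer setup on the front-end box might look something like the sketch below. The hostnames and the balancer name are placeholders, and it assumes mod_proxy, mod_proxy_http, and mod_proxy_balancer are already loaded:

```apache
# Define a pool of back-end Apache servers (hypothetical internal hostnames)
<Proxy balancer://mycluster>
    BalancerMember http://web1.internal:80
    BalancerMember http://web2.internal:80
    BalancerMember http://web3.internal:80
    # lbmethod=byrequests (round-robin by request count) is the default
</Proxy>

# Send all incoming requests to the pool
ProxyPass        / balancer://mycluster/
ProxyPassReverse / balancer://mycluster/
```

Session-sticky routing (the stickysession parameter) only matters once you have per-user state on the back ends; for static content the plain round-robin above is enough.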
I don't know whether this helps (much) but you may need to give more details about what your web app is actually doing.
One situation I looked at a couple of years ago had hit the buffers, and the developers spent some time fine-tuning the database side of things to make performance 'acceptable'. It turned out ('though I didn't get a chance to check this) that they were serving a limited set of pages from a database server (theoretically, they could be asked for a massive set; practically, 80+% of requests were for 'current data', not 'historic data from some arbitrary date range in the past'). This meant that while they were hitting the database server with a large number of time-consuming requests, practically all of them were for the same pages.
With this application profile, they would only have needed to run squid in httpd accelerator mode to stop beating the database disk to death....
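For what it's worth, the squid side of that can be just a few lines of squid.conf. A sketch using squid 2.6+ accelerator syntax (the site name and back-end address are placeholders for your own setup):

```apache
# squid.conf fragment: run squid as an HTTP accelerator (reverse proxy)
# listening on port 80 in front of the real web server
http_port 80 accel defaultsite=www.example.com

# On a cache miss, fetch from the back-end origin server
# (hypothetical: the app server moved to port 8080 on the same box)
cache_peer 127.0.0.1 parent 8080 0 no-query originserver
```

With that in place, repeated requests for the same "current data" pages are served from squid's cache instead of re-querying the database every time.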
But the point is that the 'trick' was specific to their dataset and their usage profile, and without knowing those you would have been very unlikely to spot such a simple way to get things under control.
You may be able to do something similar, but without knowing more it is impossible to say. (In your case a different trick would probably be required, but it is still possible that some trick could get you out of your present difficulty with a smaller investment than a fully-fledged load balancing solution... say, partitioning the load between two machines, or something.)