We are running a very busy site and need to scale the MySQL server, which currently runs on CentOS 5.3 (quad-core, 32 GB RAM, RAID 10). Here are a few thoughts:
1. Master-slave: The site is sensitive to delays, and any replication lag (a real possibility in master-slave or master-master setups) can hamper its functionality, so we are not looking at this option. A slave exists only for backups and the like.
2. MySQL Cluster: Same concern as above; with multiple nodes there is a possibility of delay. I'm not defending the site's design (it isn't able to tolerate any delay), but it's what we have to live with.
3. Horizontal scaling: That's the option I'm considering here. We would partition the DB into parts and deploy a DB server for each; say, all data for product IDs 1-5k is stored and served by DB1, product IDs 5k-10k by DB2, and so on.
The DB can be scaled this way, but it requires a complete overhaul of the application to route requests to the right DB server based on some criterion (such as product ID), as sketched below.
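To make the idea concrete, here is a minimal sketch of the kind of range-based routing layer I have in mind; the shard boundaries and hostnames are made up for illustration (Python):

# Range-based shard router: maps a product ID to the DB server
# that owns that ID range. Boundaries and hosts are examples only.
SHARDS = [
    # (first product ID in range, last product ID in range, host)
    (1,     5000,  "db1.example.com"),
    (5001,  10000, "db2.example.com"),
    (10001, 15000, "db3.example.com"),
]

def shard_for_product(product_id):
    """Return the host that stores the given product ID."""
    for low, high, host in SHARDS:
        if low <= product_id <= high:
            return host
    raise ValueError("no shard configured for product ID %d" % product_id)

# Example: queries for product 7342 would be routed to db2.example.com
print(shard_for_product(7342))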
What's the best way to implement this? I'm considering running Memcached on one of the DB servers to hold the ProductID -> DBServer mapping; the application would look up which server holds a particular product and then open (or reuse) a connection to that DB server. But I'm not sure how to provide failover for Memcached. A sketch of that lookup follows.
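One way to sidestep the failover question is to treat Memcached as a read-through cache over the static range map, so a cache outage costs only a little extra latency rather than availability. A minimal sketch, assuming the python-memcached client and the shard_for_product() helper from the previous sketch (hostnames and key format are examples):

import memcache

# Cache client; hostname is an example. python-memcached returns None
# from get() on a cache miss and also when the server is unreachable,
# so a dead cache degrades to the static lookup instead of failing.
mc = memcache.Client(["cache1.example.com:11211"])

def db_server_for_product(product_id):
    key = "shard:%d" % product_id
    host = None
    try:
        host = mc.get(key)      # None on miss or server failure
    except Exception:
        pass                    # treat any client error as a miss
    if host is None:
        host = shard_for_product(product_id)  # static map, always works
        try:
            mc.set(key, host)   # repopulate the cache, best effort
        except Exception:
            pass
    return host

Since the mapping is small and changes rarely, the static map is the source of truth and Memcached is purely an optimization; that makes a dedicated Memcached failover mechanism unnecessary for this use.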
Or is there a better way to scale the DB servers here? Any other tools that might help? Your suggestions will be highly appreciated.
Can you provide more information about what sort of transaction volume you are experiencing and where you have measured bottlenecks?
Usually, the first point of failure in a system like this is storage. What sort of drive infrastructure have you got? You specify RAID 10, but nothing more...
Thanks for the reply. On the current server we have two RAID arrays: RAID 1 for the operating system and RAID 10 for the database files, mounted as /db.
You may get some idea from the MySQL status output below, which shows normal DB server traffic:
# mysqladmin status
Uptime: 423214 Threads: 14 Questions: 4194862055 Slow queries: 61018 Opens: 1985 Flush tables: 1 Open tables: 1585 Queries per second avg: 9911.917
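(For reference, that average is simply Questions divided by Uptime: 4194862055 / 423214 ≈ 9911.9 queries per second.)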
The slow-query count looks high only because we flag every query taking more than 1 second as slow; we don't suspect any major bottleneck. The concern is that the site will see 3x more traffic in a few days, and the current database server can't handle that, so some kind of scaling is a must.