Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
The company I work for has a webserver hosting a PHP/MySQL website; the service runs on a dedicated server in one of our datacenters. My boss wants me to come up with a solution for when the server goes down. We also have another server in a different datacenter (in another location) that could act as a backup. What would you recommend (from your own experience) to get such a failover with database replication ... so probably an active backup?
Any help would be appreciated
Maybe...such things are typically thorny problems, especially when WANs are involved. You could set up heartbeat on the backup server to fail the services over when contact with the primary is lost, which is (ideally) what you want. The problem is that it's running over a WAN....if the cable at your datacenter going to that section of the WAN is disconnected, it fails over....even though the server at the other end isn't actually down. Now you have TWO servers running the same site/database at the same time: a classic split-brain.
If you've got a fairly bulletproof network and your WAN speed is good, I'd set up heartbeat to monitor things. Since it's a MySQL database, I'd also make sure it's replicated SOMEHOW (a dump file copied to the backup server a few times a day? MySQL Cluster?), and have a script ready to import that data into the backup server's database before it brings up the web engine. Obviously, the PHP/web pages would have to be copied daily too, so that any changes on the production server are mirrored.
In an IDEAL world, you'd have all of this on a SAN replicated between locations, and you'd just have to mount the drive.
Thanks for the reply TB0ne! Yeah, I thought of heartbeat or pacemaker, but I'm not sure it would work well over a WAN. There's another problem ... the database stores personal data, so to transfer it over the WAN I should probably deploy some kind of VPN connection between the servers. Another thing I'm considering is lsyncd + MySQL replication (master/master) over such a VPN connection, and on top of it I could just try round-robin DNS failover (two A records for the domain). What do you think of that?
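For the MySQL side of that idea, master-master replication is mostly a matter of giving each node a distinct server-id and staggering auto-increments so the two masters never hand out the same key. A minimal sketch of the my.cnf fragments follows; the IDs and values are just the usual convention, and you'd still have to create a replication user and point each node at the other with CHANGE MASTER TO:

```
# my.cnf fragments for a two-node master-master setup (sketch).
# Hostnames, the replication user, and the CHANGE MASTER TO
# statements still have to be configured on each node.

# Node A (/etc/mysql/my.cnf):
[mysqld]
server-id                = 1
log_bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 1

# Node B (/etc/mysql/my.cnf):
[mysqld]
server-id                = 2
log_bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 2
```

With increment 2 and offsets 1 and 2, node A generates odd keys and node B even ones, so writes on both masters can't collide on auto-increment columns. lsyncd would then handle the PHP files separately, outside of MySQL.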
Yes, that could definitely work. However, when you get into VPN connections, etc., there is going to be an admin cost involved in the time taken to set it up, document it, and maintain it later. It might be better to go low-tech in a way. But when you say WAN, do you mean the Internet, or a company-paid-for dedicated WAN? If it's your own circuit, you shouldn't have to worry too much about a VPN or other security measures, since you (ostensibly) OWN the connection.
You could write some simple scripts to do this, as an alternative. On the primary system:
The script will take a MySQL dump of your database, grab a copy of your web page(s), tar it all up, then PGP encrypt it and SCP it over to the backup system.
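A minimal sketch of what that primary-side script might look like. The database name, docroot, GPG recipient, and backup host are all hypothetical placeholders:

```shell
#!/bin/sh
# Primary-side backup job (sketch): dump MySQL, tar it up with the web
# pages, PGP-encrypt, and scp to the backup box. Adjust names and paths.
set -eu

backup_and_ship() {
    db_name="$1"        # e.g. "mydb" (placeholder)
    web_root="$2"       # e.g. "/var/www/html" (placeholder)
    backup_host="$3"    # e.g. "backup.example.com" (placeholder)
    work_dir="$4"       # local staging directory for the archive

    stamp=$(date +%Y%m%d-%H%M%S)
    archive="$work_dir/site-$stamp.tar.gz"
    mkdir -p "$work_dir"

    # 1. Dump the database (--single-transaction keeps InnoDB consistent).
    mysqldump --single-transaction "$db_name" > "$work_dir/db-$stamp.sql"

    # 2. Tar the dump together with the web pages.
    tar -czf "$archive" -C "$work_dir" "db-$stamp.sql" \
        -C "$(dirname "$web_root")" "$(basename "$web_root")"

    # 3. Encrypt for the backup host's public key, then ship it via scp.
    gpg --batch --yes --recipient backup@example.com \
        --output "$archive.gpg" --encrypt "$archive"
    scp "$archive.gpg" "root@$backup_host:/var/backups/site/"

    # Remove the unencrypted intermediates.
    rm -f "$work_dir/db-$stamp.sql" "$archive"
}

# Invoked from cron, e.g.:
#   backup_and_ship mydb /var/www/html backup.example.com /var/backups/site
```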
On the backup system:
A script will verify the primary is up. You can do this with wget, curl, or even a simple ping. If the check fails, decrypt the tar file, restore the MySQL data, copy the web pages into place, and bring up MySQL and HTTP.
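A sketch of that backup-side check. It assumes the archive layout produced by the primary-side job; the URL, paths, database name, and service names are all placeholders to adapt to your setup:

```shell
#!/bin/sh
# Backup-side watchdog (sketch): if the primary's site stops answering,
# decrypt the latest archive, restore, and start the services.
set -eu

primary_up() {
    # Healthy if the site answers with a non-error status within 10 seconds.
    curl --silent --fail --max-time 10 "$1" > /dev/null
}

failover() {
    archive="$1"        # newest encrypted archive shipped over by the primary
    restore_dir="$2"

    mkdir -p "$restore_dir"
    gpg --batch --yes --output "$restore_dir/site.tar.gz" --decrypt "$archive"
    tar -xzf "$restore_dir/site.tar.gz" -C "$restore_dir"

    # Bring MySQL up, load the dump, put the pages in place, start HTTP.
    # Database name, docroot ("html"), and init scripts are assumptions.
    service mysql start
    mysql mydb < "$restore_dir"/db-*.sql
    cp -a "$restore_dir/html/." /var/www/html/
    service apache2 start
}

# Typical cron usage:
#   if ! primary_up "http://www.example.com/"; then
#       failover /var/backups/site/latest.tar.gz.gpg /srv/restore
#   fi
```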
Neither script will be difficult to write, and both will be easy to maintain. If you're not looking for INSTANT failover and would be happy with a couple of minutes, it shouldn't be too complicated.
I'd definitely be happy with a couple of minutes for failover. Unfortunately, by "WAN" I mean the Internet, and for now there is no chance of getting any dedicated circuits (mainly because of their cost). You read me right: I want something that's easy to maintain and doesn't take too much time to get running. What matters most here is good performance and, above all, security (personal data).
I'd script it myself then, and keep it simple. It should really only be a few commands, and you can cron it to run the backup/transfer a few times a day. If you PGP-encrypt the tar file you'll be fairly secure, and if you use a key-based scp command, that'll not only be easy to script but fairly secure too. That'll get your data to the other machine. From there, it's just a matter of decrypt/decompress/load, and firing up the services.
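The cron side might look something like this; the schedules, log paths, and script names are hypothetical:

```
# Primary (root's crontab): dump-and-ship four times a day.
0 */6 * * *   /usr/local/bin/backup_and_ship.sh  >> /var/log/site-backup.log 2>&1

# Backup: check the primary every five minutes, fail over if it's gone.
*/5 * * * *   /usr/local/bin/check_primary.sh    >> /var/log/failover.log 2>&1
```

For the key-based scp, generate a passphrase-less keypair on the primary (ssh-keygen) and add the public key to the backup account's authorized_keys, so the cron job can copy files without a prompt.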
Aside from the replication (use an IPsec VPN between datacenters!), use an external DNS provider/monitoring company to monitor the primary and then alter DNS accordingly if it fails.
Our provider has very low (<30 second) TTL/cache times defined for our DNS entries. On the few occasions we've needed it, it's been very responsive.
Unless I'm missing something, I don't get the PGP bit. scp is encrypted anyway, and the OP has control over the two servers...
I only mentioned it because the OP wanted more security, and it's a trivial thing to script. You are right...it is optional, and scp encryption should be fine. But the OP did say that while they have control over the servers, they are using the Internet to connect them.