LinuxQuestions.org > Forums > Linux Forums > Linux - General
Old 11-05-2006, 09:38 PM   #1
Swakoo
Member
 
Registered: Apr 2005
Distribution: Red Hat / Fedora / CentOS
Posts: 508

Rep: Reputation: 30
Is it possible to achieve HA on a MySQL replicated setup?


Currently my database server running mysql is on RAID 1, and I have another server querying it to do a daily backup. So that's the only protection I have.

I am looking into doing a more active backup, so I'm exploring replication. But I realise replication doesn't do auto failover... or so I've read.

Is there a way to achieve it? That is, auto failover in a load-balanced environment, so as to ensure HA?

Can anyone please advise?

thanks!
 
Old 11-06-2006, 12:07 AM   #2
mcrbids
LQ Newbie
 
Registered: Jan 2006
Posts: 9

Rep: Reputation: 0
Don't look at possible - look at feasible.

Quote:
Originally Posted by Swakoo
Is there a way to achieve it? To achieve auto failover, load-balanced environment so as to ensure HA?
I've run database-driven stuff for years. HA is over-priced and over-rated in all but the most extreme cases. A cheap-o 1U PIII off eBay is plenty powerful for a surprising number of cases, and will usually deliver 3-4 nines (99.9% to 99.99%) on a shoestring.

Once you get ego out of the way, you'd be surprised how rarely anybody actually needs more than this. At 99.9%, you have about 8-9 hours of downtime per year. Be honest - what would happen if your system was down 1 day every year or so?

To go HA and get four nines (99.99%) means less than 1 hour of downtime per YEAR, and five nines (99.999%) is down to about 5 minutes.

And the price is constant vigilance. If hiring a qualified technician full-time in order to shave off that 1 or 2 business days per YEAR is out of the question, it's unlikely that you need to worry about it.
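The arithmetic behind those figures, as a quick shell/awk calculation:

```shell
# Downtime per year implied by each availability level:
# (1 - availability) x hours in a year (365.25 days).
for a in 99.9 99.99 99.999; do
    awk -v a="$a" 'BEGIN {
        hours = (1 - a / 100) * 365.25 * 24
        printf "%s%% -> %.1f hours/year (about %.0f minutes)\n", a, hours, hours * 60
    }'
done
```

which prints roughly 8.8 hours/year for three nines, about 53 minutes for four, and about 5 minutes for five.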

Having any system that "cuts over" automatically in a failure is a tremendous pain in the arse. A techie at the datacenter fat-fingers the switch on a power strip, and the 5 minutes of downtime on your database server morphs into a sleepless night rebuilding your primary database on the server and resetting all your logic servers to use the primary DB server again.

Yuck.

My advice? Write a script that backs up your database every hour and copies it to a remote location with scp or rsync over ssh. If you want, you can have a "hot" backup cheezo PIII that loads the database hourly as well, so that if you have to cut over, you change a setting on your web servers and you're done.
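Something along these lines - the database name, paths, and the standby host below are placeholders, adjust to taste (the DRY_RUN switch just prints what it would do):

```shell
#!/bin/sh
# Sketch of the hourly dump-and-copy script described above.
# DB name, paths and the standby host are placeholders.
set -u

DB_NAME="${DB_NAME:-appdb}"
BACKUP_DIR="${BACKUP_DIR:-/var/backups/mysql}"
REMOTE="${REMOTE:-backup@standby.example.com}"
DRY_RUN="${DRY_RUN:-0}"    # DRY_RUN=1 prints commands instead of running them

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else eval "$*"; fi
}

hourly_backup() {
    stamp=$(date +%Y%m%d-%H00)
    dump="$BACKUP_DIR/$DB_NAME-$stamp.sql.gz"
    run "mkdir -p $BACKUP_DIR"
    # --single-transaction takes a consistent InnoDB snapshot without
    # locking the tables for the whole dump
    run "mysqldump --single-transaction $DB_NAME | gzip > $dump"
    # copy offsite; rsync over ssh only resends what changed
    run "rsync -az -e ssh $dump $REMOTE:$BACKUP_DIR/"
    # keep a week of hourly dumps locally
    run "find $BACKUP_DIR -name '$DB_NAME-*.sql.gz' -mtime +7 -delete"
}

# hourly_backup    # <- uncomment to run directly, e.g. from an hourly cron job
```

Run it once with DRY_RUN=1 to sanity-check the commands before putting it in cron.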
 
Old 11-06-2006, 01:30 AM   #3
Swakoo
Member
 
Registered: Apr 2005
Distribution: Red Hat / Fedora / CentOS
Posts: 508

Original Poster
Rep: Reputation: 30
Hi, thanks for your pointers. Noted

Currently, I am already running a spare machine (doing RAID 5 though, hehe) which pulls the database from the live production server every day (5am) using scp. So that gives me a 24-hour-old backup at best (should my RAID 1 fail). Can't do it hourly as our DB is busy most of the time.

I'm currently studying HA and realise I can't do it with just any distro. I need a cluster-capable setup, something like the Red Hat Cluster Suite, to achieve my HA and LB... I suppose that's what you meant by the 'feasibility' factor.

Thus I am looking at ways to back up my DB on a 'live' basis, which leads me to exploring replication. But while that gives me an 'almost' by-the-minute backup, it doesn't roll over automatically, and hence, as you mention, probably caters to that 8 hours of downtime per year. We were actually down more than 8 hours this year because of the sheer amount of traffic and the size of the database.. but well..

so.. do you think I should just rely on replication.. and manually point to the 'slave' machine should it fail... or...? Because initially I thought a 'master-slave' replication setup meant that the slave would kick in if the master goes down. Apparently not.

But as I said.. noted your points. Very logical indeed
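For reference, the mechanics of a MySQL master-slave setup of this era come down to a few my.cnf lines on each box; the server ids and log names below are illustrative placeholders:

```ini
# master my.cnf - enable the binary log the slave reads from
[mysqld]
server-id = 1
log-bin   = mysql-bin

# slave my.cnf - unique id; read-only guards against stray writes
[mysqld]
server-id = 2
read-only = 1
```

On the slave you then issue CHANGE MASTER TO with the master's host, a replication user, and the binlog coordinates from SHOW MASTER STATUS on the master, followed by START SLAVE. Nothing in this promotes the slave automatically - which is exactly the manual-failover gap being discussed.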

Quote:
Originally Posted by mcrbids
I've run database-driven stuff for years. HA is over-priced and over-rated in all but the most extreme cases. ...
 
Old 11-08-2006, 05:42 PM   #4
mcrbids
LQ Newbie
 
Registered: Jan 2006
Posts: 9

Rep: Reputation: 0
Quote:
Originally Posted by Swakoo
so.. do u think I should just rely on replication.. and manually point to the 'slave' machine should it fail... or...?
Like I've said - replication mostly REDUCES uptime rather than improving it, because there are so many things that can go wrong. I've seen replication errors, partial switchover failures, and network burps cause more downtime than I've ever seen caused by even catastrophic hardware failure (e.g. a motherboard catching fire).

If you are really sure you want to try HA, my suggestion would be to go ahead and replicate to your backup host, and don't actually use your backup host. If your primary fails, then reconfig your backup host as the primary, and change your logic/web servers to use the backup host manually.

I'd suggest a set of scripts (I'd use SSH with RSA keys so it's automatic) that do this all in one fell swoop, switching from production to failover and back again. Test them at least monthly, at night or some other quiet window. Automate the test as well, so that it's easy enough that you'll actually do it on a regular basis.
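A minimal sketch of that kind of switchover script, assuming passwordless ssh (RSA keys) from an admin box to each web server, and a hypothetical /etc/myapp/db.conf on them holding a DB_HOST= line - all names below are placeholders:

```shell
#!/bin/sh
# Repoint every logic/web server at a new DB host, in one fell swoop.
# Host names and the config path are placeholders.
set -u

WEB_HOSTS="${WEB_HOSTS:-web1.example.com web2.example.com}"
DB_CONF="${DB_CONF:-/etc/myapp/db.conf}"
DRY_RUN="${DRY_RUN:-0}"    # DRY_RUN=1 prints commands instead of running them

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else eval "$*"; fi
}

switch_db_host() {
    new_db="$1"
    for host in $WEB_HOSTS; do
        # rewrite the DB_HOST= line, then bounce the app so it reconnects
        run "ssh $host \"sed -i 's/^DB_HOST=.*/DB_HOST=$new_db/' $DB_CONF\""
        run "ssh $host 'service httpd restart'"
    done
}

# Fail over:         switch_db_host standby.example.com
# Fall back later:   switch_db_host primary.example.com
```

The same script doubles as the monthly drill: switch to the standby, check the application, switch back.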

HA is non-trivial, and I've never seen the business case where it was actually warranted. If you can't justify a full-time DBA position to make sure that database is up 100% 24x7, you should probably be looking at a hot backup system with manual failover, with data manually propagated every hour or so and a promise of 1-2 hour turnaround during business hours in the case of a failure.

Make sure to have backups offsite. I use rsync over ssh and a set of scripts to do this - it works rather well. http://www.effortlessis.com/backupbuddy
 
Old 11-15-2006, 03:34 AM   #5
Swakoo
Member
 
Registered: Apr 2005
Distribution: Red Hat / Fedora / CentOS
Posts: 508

Original Poster
Rep: Reputation: 30
I see your point on relying on 'manual' intervention for that last 1%.

Will be playing with it further.. will come back with questions if I have any.

Thanks, people!
 
  


All times are GMT -5.