LinuxQuestions.org
Linux - Server This forum is for the discussion of Linux Software used in a server related context.

Old 02-15-2011, 03:57 AM   #1
karll
LQ Newbie
 
Registered: Sep 2009
Location: United Kingdom
Distribution: Ubuntu, RHEL, Fedora
Posts: 15

Rep: Reputation: 0
Question MySQL backup - how to deal with large amounts of data?


Hi,

Hopefully someone can give me some good advice on this:

I help run a couple of Internet forums, and recently we've been trying to get a bit more serious about backups. It seems the recommended way to do MySQL backups is to use the binlog. However, the binlog is huge! We seem to produce something like 10 GB per month.

I'd like to copy the backup somewhere off the server, as I don't feel there is much to be gained by just copying it to another location on the same server. I recently made a full backup which, after compression, amounted to 2.5 GB and took 6.5 hours to copy to my own computer ... so that approach doesn't seem practical for the binlog backup.

Any ideas? Should we rent another server somewhere? Is it possible to find a server like that really cheap? Or is there some other solution? What are other people's MySQL backup practices?
 
Old 02-15-2011, 06:38 AM   #2
choogendyk
Senior Member
 
Registered: Aug 2007
Location: Massachusetts, USA
Distribution: Solaris 9 & 10, Mac OS X, Ubuntu Server
Posts: 1,189

Rep: Reputation: 105
Yup, the binlog can get huge. Typically, though, you treat it as an incremental: you run full backups using mysqldump, with binlog-based incrementals in between. See http://dev.mysql.com/doc/refman/5.1/...p-methods.html. You can script that up yourself, with the appropriate table locking, copying the binlog somewhere, and running mysqldump with --flush-logs, but you could also use ZRM for MySQL and manage everything through that. See http://www.zmanda.com/backup-mysql.html.
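The full-plus-incremental scheme described above can be sketched as a small shell script. Everything here (paths, the backup user, the password) is a placeholder for illustration, not something from the thread:

```shell
#!/bin/sh
# Sketch of a nightly full backup. "backupuser" and its password,
# and the backup directory, are hypothetical placeholders.
STAMP=$(date +%F)
BACKUP_DIR=/var/backups/mysql

mkdir -p "$BACKUP_DIR"

# --flush-logs closes the current binlog and starts a new one, so the
# binlogs written after this point are the incrementals relative to
# this dump. --single-transaction gives a consistent snapshot of
# InnoDB tables without locking them; --master-data=2 records the
# binlog position as a comment in the dump, for point-in-time restore.
mysqldump --all-databases --single-transaction --flush-logs \
    --master-data=2 -u backupuser -p'secret' \
    | gzip > "$BACKUP_DIR/full-$STAMP.sql.gz"
```

To restore, you would load the newest full dump and then replay the binlogs written after it with mysqlbinlog.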
 
Old 02-15-2011, 07:30 AM   #3
karll
LQ Newbie
 
Registered: Sep 2009
Location: United Kingdom
Distribution: Ubuntu, RHEL, Fedora
Posts: 15

Original Poster
Rep: Reputation: 0
Thanks choogendyk - yes, I have looked into ZRM briefly and might use it once we have decided on the bigger picture. And yes, running full backups occasionally in between incremental backups makes sense. Because of the size of the databases, I'd prefer not to use mysqldump (which I imagine would create a humongous ASCII file). If MySQL is shut down, I should be able to just copy the /var/lib/mysql/ directory, yes? (mysqlhotcopy is out of the question since we have all sorts of table types (InnoDB etc.).)

There were some interesting user comments to the mysql article. Does anyone here use Amazon S3 for storage of backups? Or some similar solution?

Also, what is that snapshot filesystem they talk about in one of the user comments? If I understand correctly, it's a way to copy the database really fast? Would that be difficult to set up on a running server like we have?
 
Old 02-15-2011, 07:45 AM   #4
jcalzare
Member
 
Registered: Aug 2009
Location: Chicago
Distribution: CentOS
Posts: 114

Rep: Reputation: 34
Mysqldump is the best and most appropriate way to take legitimate full backups without using third-party software. Since the output is a text file, you can get excellent compression using gzip. I guess I don't understand why you think a mysqldump would be larger than rsyncing your entire MySQL directory.

If this is a dedicated server hosted somewhere, you may want to ask them to install an additional hard drive to use for backup space. This way, you can copy mysqldumps (or however you want to do it) to the separate hard drive for safe keeping. This is sufficient for day-to-day backups. I would then suggest downloading a copy to your PC monthly, for disaster recovery purposes in the event that your datacenter blows up or something.
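A minimal sketch of the second-drive scheme, assuming the extra drive is mounted at /mnt/backup (the mount point, credentials, and the 14-day retention window are all illustrative choices, not from the thread):

```shell
#!/bin/sh
# Daily dump to a second hard drive, with simple retention.
# /mnt/backup, "backupuser" and the password are placeholders.
DUMP="/mnt/backup/mysql-$(date +%F).sql.gz"

mysqldump --all-databases --single-transaction \
    -u backupuser -p'secret' \
    | gzip > "$DUMP"

# Keep only the last 14 daily dumps on the backup drive.
find /mnt/backup -name 'mysql-*.sql.gz' -mtime +14 -delete
```

The monthly off-site copy could then just be an scp or rsync of the newest dump to your own PC.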
 
Old 02-15-2011, 12:25 PM   #5
karll
LQ Newbie
 
Registered: Sep 2009
Location: United Kingdom
Distribution: Ubuntu, RHEL, Fedora
Posts: 15

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by jcalzare View Post
Mysqldump is the best and most appropriate way to take legitimate full backups without using third-party software. Since the output is a text file, you can get excellent compression using gzip. I guess I don't understand why you think a mysqldump would be larger than rsyncing your entire MySQL directory.
I'm thinking the MySQL directory is basically "binary", whereas the dump file from mysqldump is "ASCII", with extra overhead on top. Also, it will presumably take a long time to create the dump and then compress it afterwards. If I could simply copy one folder, that would be better.

Quote:
If this is a dedicated server hosted somewhere, you may want to ask them to install an additional hard drive to use for backup space. This way, you can copy mysqldumps (or however you want to do it) to the separate hard drive for safe keeping. This is sufficient for day-to-day backups. I would then suggest downloading a copy to your PC monthly, for disaster recovery purposes in the event that your datacenter blows up or something.
Thanks, yes that sounds like a reasonable solution. I might look into that.
 
Old 02-16-2011, 02:15 AM   #6
sneakyimp
Member
 
Registered: Dec 2004
Posts: 795

Rep: Reputation: 50
First off, I don't think there's much sense in backing a file up to another location on the same hard drive, for a couple of reasons:
* Performance is terrible. The drive head will be seeking back and forth, reading the file here and writing the copy there. With 10 GB backups, that kind of sustained disk load can also bog down everything else the server is doing.
* This type of backup is no help at all if the hard drive fails. If the drive dies, the original data and the backup are both gone and you are S.O.L. A same-disk backup only protects you against corruption that is localized to a particular file.

Making a backup to a separate hard drive is MUCH faster and should dramatically improve backup speed; more importantly, you'll be protected in the event of a catastrophic drive failure. That said, 10 GB is still pretty large: even under optimal conditions the copy will take minutes, which might still be too long.

Mysqldump is pretty awesome and always a viable option. It's fast and, depending on the options you choose, the output may be comparable in size to your original binary data, while also being more portable: just copying the data directory means your restore target needs a MySQL setup that is more or less identical to your current one, which may not be easy to arrange. Another cool thing about mysqldump is that you can run it from an entirely separate machine (perhaps a cheap, wimpy one whose sole job is making DB backups). As long as the network between the machines is fast (say, gigabit Ethernet), it's roughly as fast as a dump run on the server itself.
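Running the dump from a separate machine is just mysqldump with a remote host. A sketch, where "dbserver", "backupuser" and the password are placeholders, and the MySQL account is assumed to have SELECT, LOCK TABLES and remote-access privileges:

```shell
#!/bin/sh
# Run on the backup machine, pulling the dump over the network.
# Host name and credentials are hypothetical.
mysqldump -h dbserver -u backupuser -p'secret' \
    --all-databases --single-transaction \
    | gzip > "forums-$(date +%F).sql.gz"
```

The dump streams over the wire and is compressed locally, so the database server never has to hold the ASCII dump on its own disk.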

Another option is a bare-metal backup service like the "Guardian" product offered by liquidweb (other hosting companies have similar products). The idea with Guardian is that it continuously backs up your system's data to an offsite location. The backup is constantly being updated and is essentially up to the minute and is a backup not just of your database but other files as well (images, source, etc.). It's pricey (I think around $100 or more a month) but seems to be pretty awesome.
 
Old 02-17-2011, 12:01 AM   #7
chrism01
Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Centos 6.5, Centos 5.10
Posts: 16,261

Rep: Reputation: 2028
You should be able to create a mysqldump fairly quickly to another disk in the same machine, and it guarantees a consistent view of the DB.
It won't matter then if it takes a while to compress it on-site and transfer it to a remote site; the original DB will still be running just fine.
If you're really paranoid, you can 'nice' down the gzip and transfer jobs.
ASCII tends to compress very effectively.
Also, as above, your replacement/test systems may not be identical to the live system, but you can edit an ASCII dump if required. That's not possible with the binlog. It's just a more flexible solution.
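The 'nice'-ing mentioned above might look like this; file names and the remote host are placeholders, and ionice (idle I/O class, on Linux with the CFQ scheduler) is an optional extra on top of plain nice:

```shell
#!/bin/sh
# Compress and ship the dump at low priority so the live DB
# isn't starved of CPU or disk bandwidth.
# forums-full.sql and backup@offsite are hypothetical names.
nice -n 19 gzip forums-full.sql

ionice -c3 nice -n 19 \
    scp forums-full.sql.gz backup@offsite:/backups/
```

nice -n 19 gives the job the lowest CPU priority; ionice -c3 puts it in the idle I/O class so it only uses the disk when nothing else wants it.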

Last edited by chrism01; 02-17-2011 at 12:03 AM.
 
Old 02-18-2011, 03:36 AM   #8
karll
LQ Newbie
 
Registered: Sep 2009
Location: United Kingdom
Distribution: Ubuntu, RHEL, Fedora
Posts: 15

Original Poster
Rep: Reputation: 0
Thanks everyone for the input - much appreciated. You all make good points. I now realise that it's possible my scepticism towards mysqldump is unwarranted, so I will do some tests to see how well it performs. The flexibility it offers is definitely a plus.
 
Old 02-18-2011, 09:51 AM   #9
trickykid
Guru
 
Registered: Jan 2001
Posts: 24,133

Rep: Reputation: 197
If you're using gzip for compression, you can also look into bzip2 or even rzip. They might take a little longer to compress, but you'll get a better compression ratio, so the backups are smaller and easier to manage as well.
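It's easy to measure the trade-off on your own data before committing. A rough comparison on a synthetic dump (real numbers depend entirely on your data; rzip would be invoked similarly but often isn't installed by default):

```shell
#!/bin/sh
# Generate a repetitive fake dump, compress it both ways,
# and compare the sizes. File names here are throwaway examples.
yes 'INSERT INTO posts VALUES (1, "hello");' | head -n 100000 > sample.sql

gzip  -9 -c sample.sql > sample.sql.gz
bzip2 -9 -c sample.sql > sample.sql.bz2

ls -l sample.sql sample.sql.gz sample.sql.bz2
```

Timing each step with `time` shows the speed side of the trade-off as well.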
 
  

