LinuxQuestions.org
Old 08-22-2013, 06:15 AM   #1
jitendra.sharma
Member
 
Registered: Mar 2013
Location: ahmedabad
Distribution: CentOS, Ubuntu, Debian
Posts: 92
Blog Entries: 1

Rep: Reputation: Disabled
Need an open-source MySQL backup tool for fast backup


Hello friends,


I am using a MySQL server and my question is about MySQL backups.
My server has around 250 GB of data in the database. For backups I currently use a mysqldump script with gzip compression, but a restore takes approximately three days.

I want to improve the speed of my MySQL backup and restore process.
Is there any open-source MySQL backup tool that can back up and restore the database faster?
If one exists, please share it with me.

This is the mysqldump script I am using:

#!/bin/sh

export PATH="$PATH:/usr/bin:/usr/local/bin"

# Remote backup storage, reached over SSH
dumpuser="mysql"
dumphost="test.example.com"

# Naming pieces: per-host, per-year, per-week directories, one file per hour
hostname=$(hostname -f)
date=$(date +%Y-%m-%d-%H)
year=$(date +"%Y")
week=$(date +"%V")

## dump all databases, compress, and stream to backup storage ##

mysqldump -u mysql -p"password" -Q -e --events \
    --default-character-set=latin1 --all-databases \
  | gzip \
  | ssh "${dumpuser}@${dumphost}" \
      "mkdir -p ${hostname}/${year}/week-${week}; cat > ${hostname}/${year}/week-${week}/${date}.sql.gz"


Thanks in advance
 
Old 08-22-2013, 07:27 AM   #2
tronayne
Senior Member
 
Registered: Oct 2003
Location: Northeastern Michigan, where Carhartt is a Designer Label
Distribution: Slackware 32- & 64-bit Stable
Posts: 3,541

Rep: Reputation: 1065
A couple of hints and tweaks.

When you execute mysqldump, it is frequently used simply as
Code:
mysqldump db_name > backup-file.sql
which applies all the defaults. Typically, that's about the "best" way to do it and, of course, backup-file.sql can be whatever name and path you want. Piping through gzip is probably not such a good idea -- wait until the backup is created and then gzip it -- you're using a lot of memory in that pipeline. It might even be better to do the backup locally (if you've got the space), gzip it and then scp it to your backup platform. You'd do that in a cron job or however you choose.
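
A minimal sketch of that dump-locally-then-copy approach, assuming a staging directory /var/backups/mysql and the same remote host as the original script (both are assumptions):
Code:
#!/bin/sh
# Sketch: dump to local disk, compress after the dump completes, then copy off-host.
# /var/backups/mysql, the credentials and the remote path are assumptions.
stage="/var/backups/mysql"
stamp=$(date +%Y-%m-%d-%H)
mkdir -p "$stage"
mysqldump -u mysql -p"password" --all-databases > "$stage/$stamp.sql"
gzip "$stage/$stamp.sql"
scp "$stage/$stamp.sql.gz" mysql@test.example.com:backups/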

Reloading from a backup is going to take a great deal of time if you have a large data base that includes stored procedures and triggers, large indexes and the like. If the triggers already exist, they fire as you load -- every row, every trigger. The indexes are also maintained, all of them, as you go. That takes time, frequently a lot of time.

I use mysqldump to unload only the content of the tables, not their schema, so I have only data in the output -- I do that by running mysqldump with individual table names (which works well), and the output is only rows (the file name would be something like table-name.unl or whatever extension you choose). You can do this in a for loop in a shell program; e.g., for table in $(cat table-names-file) or something similar, as in the sketch below.
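
A minimal sketch of that per-table, data-only unload, assuming a database named mydb and a table-names-file listing one table per line (both are assumptions):
Code:
#!/bin/sh
# Sketch: data-only dumps (no CREATE TABLE statements), one .unl file per table.
# mydb and table-names-file are assumptions; --no-create-info skips the schema.
db="mydb"
for table in $(cat table-names-file); do
    mysqldump -u mysql -p"password" --no-create-info "$db" "$table" > "${table}.unl"
    gzip "${table}.unl"
done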

What I do with large data bases is maintain a schema file (the tables only), a procedures file (the triggers and stored procedures) and an index file (the indexes) as separate files; e.g., schema.sql, indexes.sql and procedures.sql. At the beginning of the schema.sql file is a drop database if exists directive followed by the create database directive. That creates the tables and gives you a clean space to write in.

Then I load the tables -- that goes really fast because you're simply inserting rows into the tables without any overhead from indexes (or stored procedures). That's where the individual table unload files are handy: load them one at a time, in whatever order you wish; you're simply inserting content into the rows.

Then I execute the indexes.sql file, creating the indexes.

Then I execute the procedures.sql file, creating the triggers and stored procedures.

What happens is that it goes really fast -- loading tables is fast, creating indexes is fast and creating triggers and stored procedures is fast (because they won't execute, they'll just be created).
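
A minimal sketch of that restore order (the database name mydb, the file names and the credentials are assumptions):
Code:
#!/bin/sh
# Sketch: restore in passes -- schema first, then data, then indexes and procedures.
# mydb, the file names and the credentials are assumptions.
mysql -u root -p"password" < schema.sql           # drop/create database, bare tables
for f in *.unl; do
    mysql -u root -p"password" mydb < "$f"        # plain inserts, no index overhead
done
mysql -u root -p"password" mydb < indexes.sql     # build all the indexes in one pass
mysql -u root -p"password" mydb < procedures.sql  # create triggers and procedures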

Now, it's a little bit of work up front to split the schema, indexes and procedures into three separate files (not that much, though), and the speed of recovery more than makes up for the couple of hours of work.

I've used this method to restore data bases with 50 or so tables containing millions of rows and it has worked quite well for me.

Another option: if you can identify the rows in each table that have changed since a specific date, you can back up just those (rather than the entire data base) and restore them in case of a failure -- a kind of rolling backup scheme (start with a full backup, then incrementally back up only the content of tables that have changed). That's more painful to set up, but it's doable if your data base design makes it practical (many don't). For example, with a transaction data base you can easily select the transactions for a given period and back those up, because there's a time-date stamp on the transactions, by day or week or whatever time period works for you.
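
A minimal sketch of that incremental selection, assuming a transactions table with an updated_at timestamp column (the database, table and column names are assumptions); mysqldump's --where option restricts the dump to matching rows:
Code:
#!/bin/sh
# Sketch: dump only the rows changed since a cutoff date.
# mydb, transactions and updated_at are assumptions.
since="2013-08-01"
mysqldump -u mysql -p"password" --no-create-info \
    --where="updated_at >= '$since'" \
    mydb transactions > "transactions-since-$since.unl"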

The net result of the above is a little work on your part to set it up, followed by a fast data base restoration (usually measured in hours rather than days). Try to avoid doing it across your LAN if possible, and try to avoid gzip in a pipe if possible. If you're unloading table data one table at a time, that will go quite quickly; simply gzip the unloaded data files as they finish unloading. On the reload side, gunzip a table data file, load it, then gzip it again if disk space is tight.

This has worked for me with significantly large data bases; you may want to give it a try.

Hope this helps some.
 
2 members found this post helpful.
Old 08-22-2013, 07:44 AM   #3
jitendra.sharma
Member
 
Registered: Mar 2013
Location: ahmedabad
Distribution: CentOS, Ubuntu, Debian
Posts: 92

Original Poster
Blog Entries: 1

Rep: Reputation: Disabled
Thanks tronayne, your views are correct.

Just asking: is there any backup tool or package so that I can easily manage my MySQL backups and tune performance?
 
Old 08-22-2013, 07:51 AM   #4
szboardstretcher
Senior Member
 
Registered: Aug 2006
Location: Detroit, MI
Distribution: GNU/Linux systemd
Posts: 4,278

Rep: Reputation: 1694
https://github.com/holland-backup/holland

I've been using this for a couple of months now. Fast.
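
A rough sketch of how it's typically driven, assuming a mysqldump-based backup set named default (the name and the config path are assumptions; see the project docs):
Code:
# Sketch: generate a mysqldump-backed backup-set config, then run it.
# The backup-set name "default" and the config path are assumptions.
holland mk-config mysqldump > /etc/holland/backupsets/default.conf
holland backup default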
 
Old 08-22-2013, 08:31 AM   #5
tronayne
Senior Member
 
Registered: Oct 2003
Location: Northeastern Michigan, where Carhartt is a Designer Label
Distribution: Slackware 32- & 64-bit Stable
Posts: 3,541

Rep: Reputation: 1065
Quote:
Originally Posted by jitendra.sharma View Post
Just asking: is there any backup tool or package so that I can easily manage my MySQL backups and tune performance?
Sorry, I don't know of any (other than mysqldump) which is, you know, the built-in utility and has all those nifty options. You might -- might -- want to look at mysqladmin (a command-line-only tool, I think) or phpMyAdmin and see if one of those might work for you. I tend to avoid GUI applications just because I'm old-fashioned and prefer command-line utilities where I've read through the manual and tried the various options to see what they do and how they perform. There is a GUI admin client for MySQL (darned if I can remember what it is right now) -- oh, yeah, it's MySQL Workbench -- that might fit your needs.

However, because I don't use them, I can't really recommend one or the other.

Hope this helps some.
 
Old 08-22-2013, 09:07 AM   #6
jitendra.sharma
Member
 
Registered: Mar 2013
Location: ahmedabad
Distribution: CentOS, Ubuntu, Debian
Posts: 92

Original Poster
Blog Entries: 1

Rep: Reputation: Disabled
Thanks tronayne for your valuable guidance...
 
Old 08-22-2013, 09:58 AM   #7
TenTenths
Senior Member
 
Registered: Aug 2011
Location: Dublin
Distribution: Centos 5 / 6 / 7
Posts: 3,475

Rep: Reputation: 1553
Also take a look at mydumper/myloader (for example, this article: http://www.palominodb.com/blog/2011/...up-and-restore).

This does something similar to mysqldump but runs multi-threaded: mydumper dumps the data and myloader loads the data back in for a restore.
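
A minimal sketch of a mydumper/myloader round trip, assuming a database named mydb, four threads and a dump directory /backups/mydb (all assumptions; check --help on your version):
Code:
# Sketch: multi-threaded dump with mydumper, multi-threaded restore with myloader.
# mydb, the credentials and /backups/mydb are assumptions.
mydumper -u mysql -p password -B mydb -o /backups/mydb -t 4
myloader -u mysql -p password -d /backups/mydb -t 4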

While the concept is sound, it didn't really help much with our (100M+ row) database, as we had complex indexes on the tables and it was the indexing that was causing the slow restore.
 
Old 08-23-2013, 03:18 AM   #8
jitendra.sharma
Member
 
Registered: Mar 2013
Location: ahmedabad
Distribution: CentOS, Ubuntu, Debian
Posts: 92

Original Poster
Blog Entries: 1

Rep: Reputation: Disabled
Hello friends,
I found a tool, "Percona XtraBackup", for MySQL backup.
Does anybody have experience with this tool for MySQL database backups?
 
Old 09-18-2013, 08:32 AM   #9
jitendra.sharma
Member
 
Registered: Mar 2013
Location: ahmedabad
Distribution: CentOS, Ubuntu, Debian
Posts: 92

Original Poster
Blog Entries: 1

Rep: Reputation: Disabled
Thanks to all,
I solved my problem with the "Percona XtraBackup" utility.
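
A rough sketch of a typical XtraBackup cycle via its innobackupex wrapper (the credentials and paths are placeholders; TIMESTAMP_DIR stands for the dated directory innobackupex creates):
Code:
# Sketch: hot physical backup with Percona XtraBackup's innobackupex wrapper.
# The credentials, /backups and TIMESTAMP_DIR are assumptions/placeholders.
innobackupex --user=root --password=secret /backups    # take the backup
innobackupex --apply-log /backups/TIMESTAMP_DIR        # make it consistent
# to restore: stop mysqld, empty the data directory, then:
innobackupex --copy-back /backups/TIMESTAMP_DIR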
 
Old 09-18-2013, 09:17 AM   #10
TB0ne
LQ Guru
 
Registered: Jul 2003
Location: Birmingham, Alabama
Distribution: SuSE, RedHat, Slack,CentOS
Posts: 26,636

Rep: Reputation: 7965
Quote:
Originally Posted by jitendra.sharma View Post
Thanks to all,
I solved my problem with the "Percona XtraBackup" utility.
Right... you said that a month ago. Is there a reason to tell us again?
 
Old 09-18-2013, 11:50 PM   #11
jitendra.sharma
Member
 
Registered: Mar 2013
Location: ahmedabad
Distribution: CentOS, Ubuntu, Debian
Posts: 92

Original Poster
Blog Entries: 1

Rep: Reputation: Disabled
Hi TB0ne,
I wanted to close my thread as solved; that's why I gave the conclusion and marked the thread as solved, so that it can be helpful to anyone else who finds it through a Google search.
 
  

