A couple of hints and tweaks.
When you execute mysqldump, it is frequently used in its simplest form:
Code:
mysqldump db_name > backup-file.sql
which applies all the defaults. Typically, that's about the "best" way to do it and, of course, backup-file.sql can be whatever name and path you want. Piping through gzip is probably not such a good idea -- wait until the backup is created and then gzip it -- you're adding a lot of memory and CPU overhead to that pipeline. It might even be better to do the backup locally (if you've got the space), gzip it and then scp it to your backup platform. You'd do that in a cron job or however you choose.
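A rough sketch of that dump-then-compress-then-copy sequence (the database name, paths and backup-host below are all placeholders, so adjust to your setup):
Code:
#!/bin/sh
# nightly-backup.sh -- hypothetical example; db_name, /var/backups
# and backup-host are placeholders, not real names
STAMP=$(date +%Y%m%d)
FILE=/var/backups/db_name-$STAMP.sql

mysqldump db_name > "$FILE"              # dump locally with the defaults
gzip "$FILE"                             # compress only after the dump completes
scp "$FILE.gz" backup-host:/backups/     # ship it to the backup platform
You'd then run that from cron, e.g. 0 2 * * * /usr/local/bin/nightly-backup.sh.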
Reloading from a backup is going to take a great deal of time if you have a large database that possibly includes stored procedures and triggers, large indexes and the like. Any triggers present during the load will fire for every row you insert. You're also going to update the indexes, all of them, as you go. Takes time, frequently a lot of time.
I use mysqldump to unload only the content of the tables, not the schema for them, so I have only data in the output. I do that by executing mysqldump with individual table names (which works well) so the output is only rows (and the file name would be something like table-name.unl or whatever extension you choose). You can do this in a for loop in a shell program; e.g., for table in $(cat table-names-file) or something similar, as sketched below.
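A minimal sketch of that loop, assuming a table-names-file with one table name per line (the --no-create-info and --skip-triggers options keep the output to data only, no CREATE TABLE or trigger definitions):
Code:
#!/bin/sh
# unload each table's rows, data only, one file per table
for table in $(cat table-names-file)
do
    mysqldump --no-create-info --skip-triggers db_name "$table" > "$table.unl"
done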
What I do with large databases is maintain a schema file (the tables only), a procedures file (the triggers and stored procedures) and an index file (the indexes) as separate files; e.g., schema.sql, indexes.sql and procedures.sql. At the beginning of the schema.sql file is a drop database if exists directive followed by the create database directive. That gives you a clean space to write in, and the rest of the file creates the tables.
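Kicking off a restore is then a one-liner (a sketch; since schema.sql starts with the drop/create directives, it also needs a use db_name directive after the create so the tables land in the right database):
Code:
# recreate the database and the (index-free) tables
mysql < schema.sql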
Then I load the tables -- that goes really fast because you're simply inserting rows into the tables without any overhead of indexes (or stored procedures). That's where individual table unload files are handy: load them one at a time in whatever order you wish; you're simply inserting content into rows.
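With the one-file-per-table .unl convention from above, the load step is just another loop:
Code:
# load each table's data file; order them however your design requires
for f in *.unl
do
    mysql db_name < "$f"
done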
Then I execute the indexes.sql file, creating the indexes. Then I execute the procedures.sql file, creating the triggers and stored procedures.
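Those last two steps are simply:
Code:
mysql db_name < indexes.sql      # build all the indexes in one pass
mysql db_name < procedures.sql   # create the triggers and stored procedures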
What happens is that it goes really fast -- loading tables is fast, creating indexes is fast and creating triggers and stored procedures is fast (because they won't execute, they'll just be created).
Now, it's a little bit of work up front to split the schema, indexes and procedures into three separate files (not that much, though) and the speed of recovery more than makes up for the couple of hours of work.
I've used this method to restore databases with 50 or so tables containing millions of rows and it has worked quite well for me.
Another option: if you can identify rows in all the tables that have changed since a specific date, you can simply back up those (rather than the entire database) and restore them in case of a failure, a kind of rolling backup scheme (start with a full backup, then incrementally back up only the content of tables that have changed). That's more painful to set up, but it's doable if your database design lends itself to making it practical (many don't). For example, with a transaction database you can easily select the transactions for a given period and back those up because there's a date-time stamp on the transactions by day or week or whatever time period works for you.
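mysqldump's --where option handles that kind of selective unload; a sketch, assuming a transactions table with a created_at timestamp column (both names hypothetical):
Code:
# dump only the rows stamped since the last full backup
mysqldump --no-create-info --skip-triggers \
    --where="created_at >= '2024-07-01'" \
    db_name transactions > transactions-incr.unl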
The net result of the above is a little work on your part to set it up, followed by a fast database restoration (usually measured in hours rather than days). Try to avoid doing it across your LAN if possible, and try to avoid gzip in a pipe if possible. If you're unloading table data one table at a time, that will go quite quickly; simply gzip the unloaded data files as they finish unloading. On the reload side, you gunzip a table data file, load it, then gzip it again if disk space is tight.
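That per-table cycle looks like:
Code:
# decompress, load, recompress -- only one table's file is
# uncompressed on disk at any given time
gunzip table-name.unl.gz
mysql db_name < table-name.unl
gzip table-name.unl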
This has worked for me with some significantly large databases; you may want to give it a try.
Hope this helps some.