Old 02-24-2014, 11:30 PM   #1
catkin
mysqldump for binary log backup scheme: needs --all-databases?


Is it necessary or useful to use mysqldump's --all-databases option when mysqldump is being run as part of a binary log backup/restore scheme?

There's nothing in the binary logging configuration that is specific to individual databases, so presumably the binary logs have "database changing events" for all databases.

If that's right, the only (?) way to do a point-in-time restore would be to restore all databases from the last mysqldump output and re-play all "database changing events" since then.
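For example, I imagine the restore flow would be something like this (file names and the cut-off time are invented):

Code:
# Periodic full dump; --flush-logs rotates to a fresh binlog so the dump
# marks a clean boundary, and --master-data=2 records the binlog file
# and position as a comment in the dump.
mysqldump --all-databases --single-transaction --flush-logs \
          --master-data=2 > /backup/full.sql

# Point-in-time restore: reload the dump, then re-play every binlog
# written since then, stopping just before the failure.
mysql < /backup/full.sql
mysqlbinlog --stop-datetime="2014-02-24 18:00:00" \
            /var/log/mysql/mysql-bin.000042 \
            /var/log/mysql/mysql-bin.000043 | mysql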

But I have a doubt: if my understanding is right, why is the need for --all-databases not mentioned in the documentation about binary log backup/restore schemes?
 
Old 07-28-2014, 10:46 PM   #2
catkin
Our offsite mysqldump of all databases is taking too long; network speed is the bottleneck. Most of the databases hold only historical data, so they do not need frequent backups.

Can mysqldump be done for individual databases as part of a backup/restore system using binary logging?

Alternatively, can binary logging for all databases be used in conjunction with an individual database's mysqldump to recover the individual database (and without affecting any of the other databases)?
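For example, something like this is what I hope is possible, if mysqlbinlog's --database filter does what I think it does (database and file names invented):

Code:
# Restore just the one database from its individual dump ...
mysql sales2013 < /backup/sales2013.sql
# ... then re-play only that database's events from the shared binlogs.
# NB: --database filters reliably only with statement-based logging, and
# only for statements issued with that database as the default database.
mysqlbinlog --database=sales2013 /var/log/mysql/mysql-bin.[0-9]* | mysql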

EDIT: am I asking the wrong question, assuming a poor solution and trying to make it work? Normally we would be using the binary logs to do replication, but our network connections are unreliable as well as slow, so AFAIK that is not an option ... ?

Last edited by catkin; 07-28-2014 at 11:11 PM.
 
Old 07-29-2014, 02:46 AM   #3
TenTenths
Why not do your dumps on your master server, gzip them with --rsyncable and then rsync them to your other server?
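Something along these lines (host and paths are placeholders):

Code:
# Dump on the master and gzip with --rsyncable (available in Debian's
# gzip) so small changes in the dump produce small rsync deltas.
mysqldump --all-databases --single-transaction \
    | gzip --rsyncable > /backup/all.sql.gz
# Only the changed blocks go over the wire.
rsync -avz /backup/all.sql.gz backup@offsite.example.com:/backup/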
 
Old 07-29-2014, 06:36 AM   #4
catkin
Thanks TenTenths

In essence, that's what we do with minor variations.

We don't have a MySQL master/slave setup because our network connections are unreliable and I understand the master will freeze if it loses connection with the slave.

So we need some other way to avoid losing data if the MySQL server crashes. The "cunning plan" was to run a script every hour (sketched below) which does:
  1. First run of the day:
    1. Create a directory for the day's backups.
    2. mysqldump all databases and bzip2-compress the output into the day's directory.
  2. Subsequent runs: flush the binlogs and rsync today's binlogs to the day's directory.
  3. All runs: rsync the day's directory to the remote server.
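In script form, roughly this sketch (paths and the remote host are placeholders):

Code:
#!/bin/bash
# Hourly backup: full dump on the first run of the day, binlog top-ups
# on subsequent runs, and an offsite sync on every run.
day_dir=/backup/$(date +%F)
if [[ ! -d $day_dir ]]; then
    # First run of the day: new directory plus a full dump.
    mkdir -p "$day_dir"
    mysqldump --all-databases --single-transaction \
        | bzip2 > "$day_dir/all-databases.sql.bz2"
else
    # Subsequent runs: close the current binlog and copy today's.
    mysql -e 'FLUSH LOGS'
    rsync -a /var/log/mysql/mysql-bin.[0-9]* "$day_dir/"
fi
# Every run: push the day's directory to the remote server.
rsync -az "$day_dir" backup@offsite.example.com:/backup/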
That was OK until the data grew and copying the mysqldump file to the remote server took more than an hour, so the first run of the day had not finished when the second started.

80% of the data is historical and unchanging. There is one database per year. There is no need to back them all up daily; the only reason for doing so is that I have not found any information about restoring a single database using binlogs.

We are considering setting up two instances of MySQL server, one for the historical databases and one for the current. That will need changes in the application ...
 
Old 07-30-2014, 02:53 AM   #5
TenTenths
Quote:
Originally Posted by catkin
We don't have a MySQL master/slave setup because our network connections are unreliable and I understand the master will freeze if it loses connection with the slave.
The master will continue just fine if the slave loses / closes the connection.

Simplistically:
With MySQL replication the Master doesn't actually care what the slaves are doing. The Master just ticks along and writes its binlog. The slave keeps track of how far along the master's binlog it is (the file and the file position); when it connects it says to the master "send me from here", reads along, and updates its record of the position. If the connection drops, the slave knows it has read up to position X/Y, and when it re-connects it does the same again.

However, if you have a short binlog retention time on the Master and the connection is lost for longer than that (for example, if you only hold 1 day's worth of binlog and lose the connection for 25 hours) then you've got problems.
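Retention is controlled in my.cnf; for example, keeping a week of binlogs instead of one day (values illustrative):

Code:
# /etc/mysql/my.cnf
[mysqld]
log-bin          = /var/log/mysql/mysql-bin
expire_logs_days = 7    # keep binlogs long enough to ride out an outage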

We have a monitoring script that runs on our slave servers; it checks that the SQL replication processes are running and how far out of sync the slave is from the master, and if that exceeds a threshold it sends a "passive" alert to our Nagios instance.
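Roughly what such a check does (the threshold and the alerting glue are specific to our setup):

Code:
#!/bin/bash
# Runs on the slave: alert if replication is stopped or badly behind.
lag=$(mysql -e 'SHOW SLAVE STATUS\G' | awk '/Seconds_Behind_Master/ {print $2}')
if [[ -z $lag || $lag == NULL || $lag -gt 3600 ]]; then
    # In our setup this is submitted as a passive check result to Nagios.
    echo "replication problem on $(hostname): lag=${lag:-no slave status}"
fi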

Quote:
Originally Posted by catkin
So we need some other way to avoid losing data if the MySQL server crashes.[..]
80% of the data is historical and unchanging.
Consider adding a TIMESTAMP column called "last_modified" to each table, with the default value set to CURRENT_TIMESTAMP and "on update CURRENT_TIMESTAMP". Then you can do a "selective dump" of only the changed rows and import that selective dump into your slave database.
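For example (table name and cut-off date are illustrative):

Code:
# Add the tracking column (repeat for each table).
mysql mydb -e "ALTER TABLE orders
    ADD COLUMN last_modified TIMESTAMP NOT NULL
    DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP"

# Selective dump of rows changed since the last run; --replace writes
# REPLACE statements so re-imported rows overwrite instead of colliding.
mysqldump --no-create-info --replace \
    --where="last_modified >= '2014-07-29 00:00:00'" \
    mydb orders | mysql -h slave.example.com mydb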
 
1 member found this post helpful.
Old 08-01-2014, 05:30 AM   #6
catkin
Thanks TenTenths

That's very useful information. So I really was assuming a poor solution and trying to make it work. The shortage of answers to seemingly obvious questions did suggest that.

We had better migrate to a master/slave arrangement.

Regarding the CURRENT_TIMESTAMP suggestion, it's easier than that because the historical data is in separate databases, one for each past year.
 
  

