Old 05-10-2012, 03:18 PM   #1
Skaperen
Senior Member
 
Registered: May 2009
Location: center of singularity
Distribution: Xubuntu, Ubuntu, Slackware, Amazon Linux, OpenBSD, LFS (on Sparc_32 and i386)
Posts: 2,684
Blog Entries: 31

Rep: Reputation: 176
Dumping MySQL


I see that mysqldump outputs databases as SQL that re-creates them. That might make sense, but I wonder whether it scales. If I have several terabytes of data, and thus several terabytes of SQL dumped, and I need to restore it, can that be done without needing several terabytes of RAM to cache the SQL, or several terabytes of extra disk space above and beyond the space the restored databases will occupy?

I guess if the restore can treat the SQL as a serialized data stream that it just deserializes as it goes, that could work. SQL just doesn't seem like a language designed for that. It seems to me that some serialization format along the lines of JSON (not necessarily JSON itself) would be better suited.
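For what it's worth, the pipeline I'd expect to use looks like this (the database name mydb is just a placeholder), and it only makes sense if both ends really do stream:

Code:
# dump and compress as a stream, no intermediate file
mysqldump mydb | xz -9 > mydb.sql.xz

# restore as a stream; the question is whether mysql needs to
# buffer much more than a statement or two of this at a time
xzcat mydb.sql.xz | mysql mydb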
 
Old 05-10-2012, 03:31 PM   #2
Kustom42
Senior Member
 
Registered: Mar 2012
Distribution: Red Hat
Posts: 1,604

Rep: Reputation: 415
It will probably kill your system on I/O with that much data being read by MySQL, but it should complete (maybe next year) as long as you do it via the mysql CLI over SSH and don't try to use something like PHPMyAdmin.

I have never heard of anyone with that much data inside of MySQL; that's pretty nuts.
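If you do run it over SSH, something like this keeps the restore alive even if your connection drops (mydb and dump.sql are placeholders, and it assumes screen is installed):

Code:
# run the restore inside a detachable session so an SSH drop doesn't kill it
screen -S restore
mysql mydb < dump.sql
# detach with Ctrl-a d, reattach later with: screen -r restore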
 
Old 05-10-2012, 03:32 PM   #3
Kustom42
Senior Member
 
Registered: Mar 2012
Distribution: Red Hat
Posts: 1,604

Rep: Reputation: 415
You can also look at using the --quick option for the mysql CLI, as it makes the client fetch one row at a time instead of buffering an entire result set in memory at once (the MySQL default).
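On the dump side the flag looks like this (I believe recent mysqldump versions already turn it on as part of --opt; mydb is a placeholder):

Code:
# stream rows from the server one at a time instead of
# buffering each whole table in the client before writing it out
mysqldump --quick mydb > mydb.sql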
 
Old 05-10-2012, 03:45 PM   #4
Skaperen
Senior Member
 
Registered: May 2009
Location: center of singularity
Distribution: Xubuntu, Ubuntu, Slackware, Amazon Linux, OpenBSD, LFS (on Sparc_32 and i386)
Posts: 2,684

Original Poster
Blog Entries: 31

Rep: Reputation: 176
Quote:
Originally Posted by Kustom42
It will probably kill your system on I/O with that much data being read by MySQL, but it should complete (maybe next year) as long as you do it via the mysql CLI over SSH and don't try to use something like PHPMyAdmin.

I have never heard of anyone with that much data inside of MySQL; that's pretty nuts.
So Postgres instead?
 
Old 05-10-2012, 08:18 PM   #5
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Rocky 9.2
Posts: 18,358

Rep: Reputation: 2751
I think you can tune mysqldump to create multi-row inserts, saving disk space. I don't think it reads everything into memory and then writes to disk; I believe it streams in and out.
For storage, most people pipe the dump through gzip or bzip2.
Also, consider mysqlhotcopy for MyISAM or ARCHIVE tables; otherwise you might need 'MySQL Enterprise Backup' for InnoDB: https://dev.mysql.com/doc/mysql-back...p-methods.html
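The multi-row-insert tuning would be something like this (mydb is a placeholder; --extended-insert is already on by default under --opt in the versions I've used):

Code:
# multi-row INSERT statements plus streaming compression
mysqldump --opt --extended-insert mydb | gzip > mydb.sql.gz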
 
Old 05-11-2012, 11:45 AM   #6
Skaperen
Senior Member
 
Registered: May 2009
Location: center of singularity
Distribution: Xubuntu, Ubuntu, Slackware, Amazon Linux, OpenBSD, LFS (on Sparc_32 and i386)
Posts: 2,684

Original Poster
Blog Entries: 31

Rep: Reputation: 176
Quote:
Originally Posted by chrism01
I think you can tune mysqldump to create multi-row inserts, saving disk space. I don't think it reads everything into memory and then writes to disk; I believe it streams in and out.
For storage, most people pipe the dump through gzip or bzip2.
Also, consider mysqlhotcopy for MyISAM or ARCHIVE tables; otherwise you might need 'MySQL Enterprise Backup' for InnoDB: https://dev.mysql.com/doc/mysql-back...p-methods.html
I do use "xz -9" on my dumps; it saves more space than gzip or bzip2 do.

I would like to see a tool that is the SQL equivalent of rsync. It would be nice if it could work across different database types (for example, to do a one-time sync of all the tables of one database in MySQL over to a specified database in Postgres).
 
Old 05-13-2012, 08:45 PM   #7
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Rocky 9.2
Posts: 18,358

Rep: Reputation: 2751
I think you'd be unlikely to find an existing tool for that (though not impossible; try SourceForge).
However, I have written similar things in the past for data take-ons etc. using Perl.
You can open multiple db handles to different dbs/schemas simultaneously, then ETL (Extract / Transform / Load) the data from one to the other; a rough one-table version of the idea is sketched below.
I've even had ones that run continuously, sort of like an rsync if you like...
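For a one-shot copy of a single table you can even fake it in plain shell (mydb, pgdb and mytable are placeholders; NULLs, escaping and type differences need real handling, which is where the Perl comes in):

Code:
# mysql -B -N prints rows as plain tab-separated text with no headers;
# psql's COPY ... FROM STDIN reads tab-separated text by default
mysql -B -N -e 'SELECT * FROM mytable' mydb \
  | psql -d pgdb -c 'COPY mytable FROM STDIN'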


Quote:
I do use "xz -9" on my dumps; it saves more space than gzip or bzip2 do.
Just out of curiosity, did you compare like with like, i.e. gzip -9, bzip2 -9, xz -9? The results tend to differ depending on the exact data.
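A quick way to compare like with like on a sample dump (dump.sql is a placeholder):

Code:
# compare compressed size (and rough speed) of all three at -9
for c in "gzip -9" "bzip2 -9" "xz -9"; do
    echo "== $c =="
    time $c -c dump.sql | wc -c
done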

Last edited by chrism01; 05-13-2012 at 08:47 PM.
 
Old 05-16-2012, 08:16 PM   #8
Skaperen
Senior Member
 
Registered: May 2009
Location: center of singularity
Distribution: Xubuntu, Ubuntu, Slackware, Amazon Linux, OpenBSD, LFS (on Sparc_32 and i386)
Posts: 2,684

Original Poster
Blog Entries: 31

Rep: Reputation: 176
Quote:
Originally Posted by chrism01
I think you'd be unlikely to find an existing tool for that (though not impossible; try SourceForge).
However, I have written similar things in the past for data take-ons etc. using Perl.
You can open multiple db handles to different dbs/schemas simultaneously, then ETL (Extract / Transform / Load) the data from one to the other.
I've even had ones that run continuously, sort of like an rsync if you like...



Just out of curiosity, did you compare like with like, i.e. gzip -9, bzip2 -9, xz -9? The results tend to differ depending on the exact data.
I only did a simple compression test. xz worked best on space, though it was slower. From experience with these compressors, I expected xz to win on size and gzip to win on time, and that is how it turned out; most cases are like that.
 