09-22-2005, 09:39 PM | #1
Member | Registered: Apr 2005 | Distribution: Red Hat / Fedora / CentOS | Posts: 508
mysqldump: can I split the dump so each file is at most 2GB?
As some of you guys have noticed, I am working with mysqldump to maintain a daily backup.
The problem is that my live database servers are all 64-bit machines (so, as I understand it, a 4GB-per-file limit). My backup server, which stores the backup files and, if need be, acts as a stand-in database server, is a 32-bit machine, which, as I understand it, means a 2GB max per file.
Most of my databases give no trouble except for one: it is 3.8GB before gzip and 1.2GB after, before I transfer it over.
Here's the question:
How do I run mysqldump so that the dump splits itself into files of at most 2GB each (before gzip)?
Oh, a side question... I did try to decompress the 1.2GB file on my backup server (32-bit), and it expanded to 3.8GB. The OS didn't report any errors (I thought it would)?
Then I tried to restore the file, and it always ended up reporting: ERROR 1062 (23000) at line 6433: Duplicate entry 'swensen?fs' for key 2
I did a comparison and all the tables are in, but I can't tell for sure whether all the data is there, as the database is highly active..
Someone suggested that the live database server's MySQL config may be different, and that's why. (Oh, and it took almost a day to restore the mysqldump onto the 32-bit machine.) Can anyone point me to the MySQL config? Is it only /etc/my.cnf?
THANKS people!!! 
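On the config question, a quick way to see which option files MySQL actually reads, in order, is to ask it directly. This is only a rough sketch; the exact wording and paths vary by MySQL version and distro:
# list the option files the server reads, in the order it reads them
mysqld --verbose --help 2>/dev/null | grep -A 1 "Default options"
# the client reads its own set; compare with:
mysql --help | grep -A 1 "Default options"
# typical candidates are /etc/my.cnf, /etc/mysql/my.cnf and ~/.my.cnf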
09-23-2005, 02:38 AM | #2
Member | Registered: Nov 2004 | Location: Germany | Distribution: Debian Testing | Posts: 332
Maybe split(1) is what you need.
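For reference, a bare-bones sketch of split(1) on an already-written dump (GNU split is assumed; the file names are only examples):
# cut a large dump into pieces of at most ~1.9GB each, named dbname.sql.part-aa, -ab, ...
split -b 1900m dbname.sql dbname.sql.part-
# the aa/ab/... suffixes sort alphabetically, so a plain cat restores the original order
cat dbname.sql.part-* > dbname-restored.sql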
09-23-2005, 04:35 AM | #3
Member | Registered: Apr 2005 | Distribution: Red Hat / Fedora / CentOS | Posts: 508 | Original Poster
Can it be used in conjunction with my existing command?
My current command is
mysqldump --opt -uxxxx -pyyyy -B dbname | gzip -9 > /home/user/db_files/dbname-$date.sql.gz
I read the help... so if I specify the size in bytes (which would be 1000000000 for 1GB?), do I just need to add
split --bytes=1000000000 filename.sql ?
09-23-2005, 06:46 AM | #4
Member | Registered: Nov 2004 | Location: Germany | Distribution: Debian Testing | Posts: 332
Quote: Originally posted by Swakoo
Can it be used in conjunction with my existing command?
You mean in a pipe? Yes, but only at the end of the pipeline, since split doesn't write to stdout (that wouldn't make sense).
You would do it like this:
mysqldump [options] | split -b 1000000000 - dbname-$date.sql-
for f in dbname-$date.sql-* ; do gzip -9 $f ; done
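To load the pieces back in later, something like this should do it (untested sketch; the credentials are the same placeholders used earlier in the thread, and zcat works here because split's aa/ab/... suffixes keep the pieces in alphabetical, i.e. original, order):
# decompress every gzipped piece and stream them, in order, straight into mysql
zcat dbname-$date.sql-*.gz | mysql -uxxxx -pyyyy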
10-05-2005, 11:59 PM | #5
Member | Registered: Apr 2005 | Distribution: Red Hat / Fedora / CentOS | Posts: 508 | Original Poster
Quote: Originally posted by addy86
You mean in a pipe? Yes, but only at the end of the pipeline, since split doesn't write to stdout (that wouldn't make sense).
You would do it like this:
mysqldump [options] | split -b 1000000000 - dbname-$date.sql-
for f in dbname-$date.sql-* ; do gzip -9 $f ; done
I don't understand... do you mean the whole line is one command, including the for statement?
What do you mean by "only at the end of the pipeline, since split doesn't write to stdout"? Sorry, just trying to clarify, as it is a live server and I don't have another big database to try this with...
How does the pipe work..?
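For what it's worth, a pipe simply connects the standard output of one command to the standard input of the next, and the semicolon just separates two independent commands on one line; a trivial illustration with everyday commands:
# dmesg writes to stdout; the pipe feeds that text into less's stdin
dmesg | less
# split has to sit at the end of such a chain because it writes files, not stdout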
10-12-2005, 05:06 AM | #6
Member | Registered: Apr 2005 | Distribution: Red Hat / Fedora / CentOS | Posts: 508 | Original Poster
The command works now; I played around with it a bit..
I didn't know you could incorporate for loops into a command... interesting!
10-12-2005, 05:15 PM | #7
Senior Member | Registered: Dec 2004 | Location: Lee, NH | Distribution: OpenSUSE, CentOS, RHEL | Posts: 1,794
What filesystem are you using where the max file size is 4GB? Or is this to save the files to DVD once they are backed up?
Remember: file size limitations have nothing to do with the instruction size your processor uses. I run a 32-bit Pentium 4, and yet on ReiserFS (or NTFS, if I boot to Windows) I can generate files many gigabytes in size.
With that said, to answer your question: I'd pipe the dump through tar (or better yet, dar) and specify that the files be created in 3.9GB volumes.
Just FYI: 32-bit systems are not limited to 2GB file sizes, and 64-bit systems are not limited to 4GB file sizes. The "bitness" of file allocation is completely independent of the "bitness" of the CPU's instruction set, the CPU's address lines, and so forth.
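Whatever the real limit turns out to be, one way to keep every piece safely under 2GB regardless of filesystem is to compress first and then split the compressed stream. This is only a sketch reusing the placeholder credentials from earlier posts; note that each piece on its own is not a complete gzip file, so they must be joined back together before decompressing:
mysqldump --opt -uxxxx -pyyyy -B dbname | gzip -9 | split -b 1900m - dbname-$date.sql.gz.
# restore: rejoin the pieces, then decompress and load
cat dbname-$date.sql.gz.* | gunzip | mysql -uxxxx -pyyyy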
10-13-2005, 08:41 PM | #8
Member | Registered: Apr 2005 | Distribution: Red Hat / Fedora / CentOS | Posts: 508 | Original Poster
Quote: Originally posted by KimVette
What filesystem are you using where the max file size is 4GB? [...] Just FYI: 32-bit systems are not limited to 2GB file sizes, and 64-bit systems are not limited to 4GB file sizes.
Hi there,
My impression had always been that it was 32-bit = 2GB and 64-bit = 4GB...
So you are recommending using tar rather than gzip?
10-14-2005, 05:36 AM | #9
Senior Member | Registered: May 2004 | Location: Hilliard, Ohio, USA | Distribution: Slackware, Kubuntu | Posts: 1,851
Quote: Originally posted by Swakoo
So you are recommending using tar rather than gzip?
Tar contains no compression; it just provides the ability to archive and split files. Tar literally stands for Tape ARchiver. You can use tar and gzip in conjunction to create a .tar.gz of your database, if compression is what you're after.
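For example, either of these would do it (a sketch; the paths and file names are placeholders based on the command earlier in the thread):
# archive the dump directory and gzip the archive in one step
tar -czf db_files-$date.tar.gz /home/user/db_files
# or tar to stdout and pipe through gzip explicitly
tar -cf - /home/user/db_files | gzip -9 > db_files-$date.tar.gz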
10-15-2005, 01:31 PM | #10
Senior Member | Registered: Dec 2004 | Location: Lee, NH | Distribution: OpenSUSE, CentOS, RHEL | Posts: 1,794
I recommend tar since tar is an archive program and gzip is purely compression (a bit different from pkzip/winzip, which offer integrated archival and compression capability).
Also, I'd recommend bzip2 over gzip, because with bzip2 you can still get at your data even if there are minor errors; gzip requires an error-free file in order to decompress any of it.
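As a sketch of what that looks like with the dump command from earlier in the thread (credentials are placeholders; bzip2recover is the companion tool shipped with bzip2 that salvages the intact blocks from a damaged archive):
mysqldump --opt -uxxxx -pyyyy -B dbname | bzip2 -9 > /home/user/db_files/dbname-$date.sql.bz2
# if the file later turns out to be damaged, pull out whatever blocks are still intact:
bzip2recover dbname-$date.sql.bz2    # writes a series of rec*.bz2 pieces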
10-17-2005, 04:13 AM | #11
Member | Registered: Apr 2005 | Distribution: Red Hat / Fedora / CentOS | Posts: 508 | Original Poster
Hi guys,
Comparing tar to split(1), what should I know about the differences?
Currently I have it working using split + gzip.
I was considering bzip2... I can't remember what the difference was that made me go with gzip... I think it was the compression... but what you're saying about bzip2 working even when there are some errors (compression errors, I presume, or CRC errors?) is worth considering... I will take note.