Can dd be used to clone just the data on a partition?
Thanks Dave. I was just thinking of a way to keep a backup of the distro, since I've gone to a lot of trouble configuring and installing the software and have a nice, smoothly working OS in place. Obviously I would like a backup in case of emergencies. Would rsync be a better option as opposed to good old tar or cp?
To quote Dave, "in a word, yes". Rsync will make a file-for-file copy while (optionally) compressing the data in transit. The advantage is that tomorrow, next week, next month, you can do another rsync and the only things that will be transferred are the files that have changed.
(1) I would use Gparted to resize the partition, which should be just the Linux partition, say hda1, to shrink it to, say, 2GB. Have it booted and verify everything is working. For an 850MB Linux I would not expect it to have a swap partition.
(2) Since hdb is going to be overwritten, I would temporarily move any data on it that I wish to keep into another partition on hda.
(3) I would delete all the partitions on hdb and recreate hdb1 exactly the same size as hda1, down to the exact number of cylinders and sectors, and with the same partition type ID.
(4) I dd the partition across with the command
dd if=/dev/hda1 of=/dev/hdb1 bs=32256
I expect my cloned Linux to boot the same as hda if I remove hda and put hdb in its position.
If it is just the data, use any of cp, tar or rsync. I use dd if I want to back up the boot sector, which no other file-copying command would touch.
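The boot-sector point can be sketched safely against a throwaway file standing in for a disk (on real hardware you would read from e.g. /dev/hda instead, and need root; the file names here are made up for the demo):

```shell
# Build a fake 4-sector "disk" and stamp some marker bytes at the start,
# where a real disk keeps its boot code and partition table.
dd if=/dev/zero of=/tmp/fake-disk.img bs=512 count=4 2>/dev/null
printf 'BOOTCODE' | dd of=/tmp/fake-disk.img conv=notrunc 2>/dev/null

# Save just the first 512-byte sector. cp/tar/rsync work at the file level
# and can never see this region; dd copies it byte for byte.
dd if=/tmp/fake-disk.img of=/tmp/bootsect.bak bs=512 count=1 2>/dev/null
```

Restoring is the same command with if= and of= swapped (plus conv=notrunc so dd doesn't truncate the target).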
Thank you very much, all, and Saikee. A similar solution had crossed my mind, but I did not want to go through the trouble of partitioning, resizing and more partitioning. However, I need to clone the Linux install including the boot record, and you have reassured me that the only method by which this can be achieved is via dd.
Would the following work ?
1. Resize hda1 (containing the Linux OS) to 1GB
2. Partition hdb to give a 1.5GB (or would this have to be the same size as hda1?) ext2/3 hdb1 and mount this partition on, say, /mnt/hdb1
3. Use the command
Thanks everyone for all your kind help. Your feedback has been invaluable and greatly appreciated in my linux learning curve !
I would like to become less and less reliant on the GUI and do as much work as possible on the CLI (the best way to learn about Linux, IMHO), hence my reluctance to use an X-based program. I thought I could use dd, but as saikee pointed out I would have to "freeze" the hard drive size, something which may not be practically feasible. Unfortunately I may have to revert to using Partimage.
Thanks for your help and advice. I have learnt a lot about the "dd" command in the last 24 hours !
You could instead pipe the output of dd through either gzip or bzip2. I once tried an experiment: I filled a partition and then deleted the files. The image didn't compress well, because the blocks of the deleted files were being copied as well. Then I used dd to zero-fill the partition's free space, deleted the zero file, and tried again. This time the image was around the size of the drive usage. To restore, just reverse the process: cat the file with zcat or bzcat and pipe the output into dd.
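A sketch of the image-and-restore cycle, using a small zero-filled file in place of a real partition (real use would have if=/dev/hdaX and need root; the zeroed blocks are exactly why the compressed image stays close to the size of the live data):

```shell
# A 4MB file standing in for a zeroed partition.
dd if=/dev/zero of=/tmp/part.img bs=1M count=4 2>/dev/null

# Take a compressed image: dd reads the raw device, gzip squeezes the zeros.
dd if=/tmp/part.img bs=1M 2>/dev/null | gzip -c > /tmp/part.img.gz

# Restore by reversing the pipe: zcat decompresses, dd writes the raw bytes back.
zcat /tmp/part.img.gz | dd of=/tmp/part-restored.img bs=1M 2>/dev/null
```

The 4MB of zeros gzips down to a few kilobytes, which is the whole trick: unzeroed free space full of old deleted files would compress hardly at all.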
@jschiwal: Yeah, I tried doing the "dd piped through gzip" thing a few years ago too. Didn't work out too well, IIRC. I needed the image to fit on a CD and it ended up nowhere near 700MB (it was something like three or four gigs IIRC, when there was way less than a gig in actual data). I didn't do the drive zeroing, though. It sounds like it could have made a huge difference. Do you basically copy the files to another partition, zero the original partition, then copy the files back? Or do you have a way to zero the unused parts of the partition without having to move the data?
@uncle-c: I think it's actually great that you use Partimage (or any of the bare metal recovery tools that are based on it) for this. dd is an awesome tool, but it's not the best tool for every job. That said, it would indeed be kinda fun to tinker with it and see how much can be squeezed out of it from the command-line for this type of application. I might do just that if I get a reply from jschiwal.
Copy all of the files using cp, and then copy the boot sector with:
dd if=/dev/hdaX of=/dev/hdaY bs=512 count=1
#/dev/hdaX is the partition it's currently on
#/dev/hdaY is the partition you want it on
#512 is the size of the boot sector in bytes: one sector, the same for a floppy as for a (traditional) hard disk
The above is untested but should work. Please be aware that it might destroy your data (but not your disk). So I advise making a backup.
When I ran the test, I did so on an ext3 image around 20GB in size. I filled it up with podcasts and then deleted them. Then I copied the files in /boot to this partition. The pre-zeroed test was around 15GB in size. The post-zeroed test was around 20MB. I may have left 10MB unzeroed when I did my test. I probably should have zeroed it with something like:
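A hypothetical sketch of such a zeroing command. To keep it runnable it targets a demo directory rather than a mounted /boot, and the block count is a made-up demo value; on a real filesystem you would take the free 1K blocks from `df -k` and subtract a few:

```shell
# Demo directory standing in for a mounted partition such as /boot.
TARGET=/tmp/zerofill-demo
mkdir -p "$TARGET"

# Demo value. On a real mount you would use the "Available" 1K-block
# figure from `df -k "$TARGET"`, minus a few blocks of headroom so the
# write does not fail on a completely full filesystem.
FREE_BLOCKS=100

# Fill the free space with zeros via a temporary file.
dd if=/dev/zero of="$TARGET/zero.tmp" bs=1024 count="$FREE_BLOCKS" 2>/dev/null
```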
This would zero the free space on the partition (in my case, /boot on my desktop), leaving the last five 1K blocks unzeroed, which should be enough to allow for the added inode entry.
Then delete the zero.tmp file.
I was expecting bzip2 to have better compression, but in this test, gzip was better.
After the first image backup, I think using tar with the -g option would be better. After the first tar backup, only incremental backups are performed until you start a new cycle.
During a `--create' operation, specifies that the archive that
`tar' creates is a new GNU-format incremental backup, using
SNAPSHOT-FILE to determine which files to backup. With other
operations, informs `tar' that the archive is in incremental
format. *Note Incremental Dumps::.
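A sketch of that -g cycle on a throwaway directory (all paths here are invented for the demo): the first run, with a fresh snapshot file, is the full "level 0" backup; the second run records only what changed since.

```shell
# Clean slate so the first tar run really is a level-0 backup.
rm -rf /tmp/tar-demo
mkdir -p /tmp/tar-demo/data
echo one > /tmp/tar-demo/data/a.txt

# Level 0: full backup; the snapshot file records file states.
tar -C /tmp/tar-demo -g /tmp/tar-demo/snap -cf /tmp/tar-demo/full.tar data

# Change something, then take an incremental against the same snapshot:
# only the new/changed files are stored.
echo two > /tmp/tar-demo/data/b.txt
tar -C /tmp/tar-demo -g /tmp/tar-demo/snap -cf /tmp/tar-demo/incr.tar data
```

To start a new cycle, delete (or copy aside) the snapshot file and take another full backup. Restore in order: extract the full archive first, then each incremental.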
The tar info manual has an example of tarring entire partitions, with the receiving end of the pipe untarring them in a subshell. You could easily modify this so that you could restore an entire partition over the network, using netcat for example:
$ (cd sourcedir; tar -cf - .) | (cd targetdir; tar -xf -)
You can avoid subshells by using `-C' option:
$ tar -C sourcedir -cf - . | tar -C targetdir -xf -
The command also works using long option forms:
$ (cd sourcedir; tar --create --file=- . ) \
| (cd targetdir; tar --extract --file=-)
$ tar --directory sourcedir --create --file=- . \
| tar --directory targetdir --extract --file=-
I think external drives work best for backups, however. One of the DVDs containing the backup could become damaged. Also, catting image slices together is more difficult when the backup spans several DVDs. The larger the image, the more likely a drive problem will cause a failure during the backup, and the more likely one of the DVDs will be damaged.
On a very full system, a tar backup of the /home directories may be too large to fit on a DVD, or may exceed the file size limit of a FAT32-formatted external drive. In this case, you can pipe through the split program. To restore, simply cat the slices together and pipe the output into the tar command.
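A small sketch of the split-and-cat round trip. The sizes and paths are demo values; for DVD-sized media you would use something like -b 4G and your real home directory:

```shell
# Clean demo tree with one largish file in it.
rm -rf /tmp/split-demo
mkdir -p /tmp/split-demo/home
head -c 100000 /dev/zero > /tmp/split-demo/home/big.dat

# Backup: stream the tar to split, which writes fixed-size slices
# (slice.aa, slice.ab, ...). 32k here keeps the demo tiny.
tar -C /tmp/split-demo -cf - home | split -b 32k - /tmp/split-demo/slice.

# Restore: concatenate the slices in order (the shell glob sorts them
# lexicographically, which is the order split wrote them) and untar.
mkdir -p /tmp/split-demo/restore
cat /tmp/split-demo/slice.* | tar -C /tmp/split-demo/restore -xf -
```

The same `cat slices | tar -tf -` pipe gives you a listing without extracting anything.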
I used tar and split to back up my home directory when I performed a fresh upgrade install. The external drive was formatted in ext3; it wasn't empty, so I didn't reformat it. I kept the slices reasonably sized (800MB - 1GB, I don't remember exactly) and used par2create to create parity files in case one of them became damaged. I was able to cat the files back together in the reverse process, and could even produce a tar listing that way. I didn't restore everything I had, but was able to restore just what I wanted (newer podcasts, pdf documents, konqueror web archives).
Using dd, these two options could be useful:
`skip=BLOCKS'   Skip BLOCKS `ibs'-byte blocks in the input file before copying.
`seek=BLOCKS'   Skip BLOCKS `obs'-byte blocks in the output file before copying.
For example, you could have slices, and include the starting block and the count in the filename. Then during restore, the filename would supply the block offset and count that the dd command would use. This would allow restoring from several DVDs even if you restored them out of order.
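A sketch of that offset-addressed slicing, with a small file standing in for the device and made-up names encoding the offset (here blocks 4-8 of an 8-block "device"): skip= positions the read on backup, seek= positions the write on restore, and either slice can be put back independently of the others.

```shell
# An 8-block (1K each) random "device" to slice up.
dd if=/dev/urandom of=/tmp/dev.img bs=1024 count=8 2>/dev/null

# Backup slice: blocks 4..7, name carries the offset (skip=4, count=4).
dd if=/tmp/dev.img of=/tmp/slice.4-8 bs=1024 skip=4 count=4 2>/dev/null

# Restore onto a fresh target: first the leading blocks, then the slice
# written back at its recorded offset with seek=. conv=notrunc stops dd
# from truncating the target file around each write.
dd if=/dev/zero of=/tmp/dev-restore.img bs=1024 count=8 2>/dev/null
dd if=/tmp/dev.img of=/tmp/dev-restore.img bs=1024 count=4 conv=notrunc 2>/dev/null
dd if=/tmp/slice.4-8 of=/tmp/dev-restore.img bs=1024 seek=4 conv=notrunc 2>/dev/null
```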
A backup or restore will go a lot faster without compression. Some file types do not compress well anyway, and drives have gotten larger and cheaper. Saving time may be more important than saving bits, especially for larger backups.
Some filesystems have a dump program that you can use to dump a partition's filesystem, backing up only the parts that are being used. One example is xfsdump, which can perform a dump on a live filesystem. See the readme files in /usr/share/doc/ for xfsprogs and the xfsdump man page for more details.
Added bonus: Another handy use of dd. Generating a random preshared key:
dd if=/dev/random bs=32 count=1 2>/dev/null | od -t x1 | sed '$d;s/^[[:xdigit:]]* //;s/ //g' | tr -d '\n'; echo