LinuxQuestions.org


uncle-c 10-28-2007 02:34 PM

Can dd be used to clone just the data on a partition ?
 
Hi there,
Here is the problem.

HDA : 20Gb - 850 Mb is taken up by a linux distro and the rest is free.

HDB : 13.6 Gig - ext3 file system, all free.

Could I use the dd command to clone /dev/hda onto /dev/hdbX/backup so that it copies only the 850Mb and not the whole 20Gb?

What I will probably do is partition HDB and create a 1Gb partition to hold the 850Mb (if the above scenario is possible).

Cheers,
Uncle

ilikejam 10-28-2007 02:36 PM

In a word, no.

rsync is probably the tool of choice.

Dave

uncle-c 10-28-2007 03:13 PM

Thanks Dave. I was just thinking of a way to keep a backup of the distro, since I've gone to a lot of trouble configuring/installing the software and have a nice, smoothly working OS in place. Obviously I would like a backup in case of emergencies. Would rsync be a better option as opposed to good old tar or cp?

ilikejam 10-28-2007 03:17 PM

In this case I don't think rsync has any particular advantages over 'cp -a'.

If you're doing this periodically, though, rsync would be significantly faster after the first run.
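
Something like this would do it for periodic runs (a rough sketch from memory, untested; the destination path is just an example of wherever hdb1 is mounted):

Code:

# -a preserves permissions, ownership, timestamps and symlinks;
# --delete removes files from the backup that no longer exist on the source;
# the first run copies everything, later runs only transfer changed files.
rsync -a --delete --exclude=/proc --exclude=/sys --exclude=/mnt / /mnt/hdb1/backup/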

Dave

Jim44 10-28-2007 03:17 PM

To quote Dave, "in a word, yes". Rsync will make a bit-for-bit copy while (possibly) compressing the data in transit. The advantage is that tomorrow, next week, next month, you can do another rsync and the only things transferred will be the files that have changed.

Jim.

saikee 10-28-2007 05:22 PM

I think the answer is a BIG yes.

Here are the steps by which I would achieve it.

(1) I would use Gparted to resize the Linux partition, say hda1, shrinking it to, say, 2Gb. Boot it and verify everything is working. For an 850Mb Linux I would not expect it to have a swap partition.

(2) Since hdb is going to be used, I would temporarily move any data on it that I wish to keep into another partition on hda.

(3) I would delete all the partitions on hdb and recreate hdb1 exactly the same size as hda1, down to the exact number of cylinders and sectors, and with the same partition type ID.

(4) I would dd the partition across with the command
Code:

dd if=/dev/hda1 of=/dev/hdb1 bs=32256

I expect my cloned Linux to boot the same as hda if I remove hda and put hdb in its place.

If it is just the data, use any of cp, tar or rsync. I use dd when I want to back up the boot sector, which no file-copying command would touch.
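
As a rough sketch of what I mean by the boot sector part (untested and from memory; the file name and mount point are just examples):

Code:

# Save the first 512 bytes of the disk (boot code plus partition table) to a file
dd if=/dev/hda of=/mnt/hdb1/hda-mbr.img bs=512 count=1

# Put back only the boot code (first 446 bytes), leaving the partition table untouched
dd if=/mnt/hdb1/hda-mbr.img of=/dev/hda bs=446 count=1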

uncle-c 10-28-2007 06:09 PM

Thank you very much, all, and especially saikee. A similar solution had crossed my mind, but I did not want to go through the trouble of partitioning, resizing and more partitioning. However, I need to clone the Linux install including the boot record, and you have reassured me that the only method by which this can be achieved is via dd.

Would the following work ?

1. Resize hda1 ( Containing Linux OS) to 1 Gb
2. Partition hdb to give a 1.5Gb (or would this have to be the same size as hda1?) ext2/3 hdb1 and mount this partition on, say, /mnt/hdb1
3. Use the command
Code:

dd if=/dev/hda1 of=/mnt/hdb1/linux.backup bs=32256

This would give me a file/clone on hdb1 called "linux.backup" of size 1Gb.

If the above scenario were possible, could the file "linux.backup" be burned/copied onto a CD-R / DVD-R / USB stick for safekeeping?

Cheers,
Uncle

win32sux 10-28-2007 06:37 PM

I'm curious as to why you don't use something like Mondo for this.

Bare metal recovery seems to me like the right tool for the job (nothing personal against dd, of course).

saikee 10-28-2007 07:14 PM

uncle-c,

I think your scheme will work. You do have to freeze the hda1 partition size though, otherwise the backup file would not work when you restore it.
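
To restore you would just reverse the direction of dd, and you can even loop-mount the image file to check it first (a rough sketch; the paths are only examples):

Code:

# Restore the image back onto a partition of identical size
dd if=/mnt/hdb1/linux.backup of=/dev/hda1 bs=32256

# Or mount the image read-only via loopback to inspect its contents
mkdir -p /mnt/test
mount -o loop,ro /mnt/hdb1/linux.backup /mnt/test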

Boow 10-28-2007 08:25 PM

Just use Partimage; it's on the Knoppix CD.

uncle-c 10-29-2007 04:04 AM

Thanks everyone for all your kind help. Your feedback has been invaluable and greatly appreciated in my Linux learning curve!
I would like to become less and less reliant on the GUI and do as much work as possible on the CLI (the best way to learn about Linux, IMHO), hence my reluctance to use an X-based program. I thought I could use dd, but as saikee pointed out I would have to "freeze" the partition size, something which may not be practically feasible. Unfortunately I may have to fall back on Partimage.
Thanks for your help and advice. I have learnt a lot about the "dd" command in the last 24 hours!

All good wishes,
Uncle-C

jschiwal 10-29-2007 04:39 AM

You could instead pipe the output of dd through either gzip or bzip2. I once tried an experiment: I filled a partition with files and then deleted them. The image didn't compress well, because the blocks from the deleted files were being copied as well. Then I used dd to fill the free space with zeros, deleted the zero file, and tried again. This time the image was around the size of the actual drive usage. To restore, just reverse the process: cat the file with zcat or bzcat and pipe the output into dd.
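
A rough sketch of the whole cycle, assuming the partition is /dev/hda1 and the image goes onto a drive mounted at /mnt/hdb1 (untested; adjust the devices and paths):

Code:

# Zero out the free space so it compresses well, then remove the filler file
mkdir -p /mnt/tmp
mount /dev/hda1 /mnt/tmp
dd if=/dev/zero of=/mnt/tmp/zero.tmp bs=1M   # runs until the partition is full
rm /mnt/tmp/zero.tmp
umount /mnt/tmp

# Image the partition through gzip
dd if=/dev/hda1 bs=1M | gzip -c > /mnt/hdb1/hda1.img.gz

# Restore: reverse the process
zcat /mnt/hdb1/hda1.img.gz | dd of=/dev/hda1 bs=1M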

win32sux 10-29-2007 08:02 AM

@jschiwal: Yeah, I tried doing the "dd piped through gzip" thing a few years ago too. Didn't work out too well, IIRC. I needed the image to fit on a CD and it ended up nowhere near 700MB (it was something like three or four gigs IIRC, when there was way less than a gig of actual data). I didn't do the drive zeroing, though. It sounds like it could have made a huge difference. Do you basically copy the files to another partition, zero the original partition, then copy the files back? Or do you have a way to zero the unused parts of the partition without having to move the data?

@uncle-c: I think it's actually great that you use Partimage (or any of the bare metal recovery tools that are based on it) for this. dd is an awesome tool, but it's not the best tool for every job. That said, it would indeed be kinda fun to tinker with it and see how much can be squeezed out of it from the command-line for this type of application. I might do just that if I get a reply from jschiwal. :)

jamesstanley 10-29-2007 08:04 AM

Copy all of the files using cp, and then copy the boot sector with:

Code:

dd if=/dev/hdaX of=/dev/hdaY bs=512 count=1
#/dev/hdaX is the partition it's currently on
#/dev/hdaY is the partition you want it on
#512 is the size of the boot sector in bytes; it is 512 for a hard disk as well as for a floppy.

The above is untested but should work. Please be aware that it might destroy your data (but not your disk), so I advise making a backup first.

jschiwal 10-29-2007 06:36 PM

When I ran the test, I did so on an ext3 image around 20GB in size. I filled it up with podcasts and then deleted them. Then I copied the files in /boot to this partition. The pre-zeroed image was around 15GB in size; the post-zeroed image was around 20MB. I may have left 10MB unzeroed when I did my test. I probably should have zeroed it with something like:
Code:

# fill the free space (as reported by df, in 1K blocks) with zeros
dd if=/dev/zero of=zero.tmp bs=1024 count=$(df /dev/sdb1 | awk '/dev/ {print $4-5}')

This example would zero the /boot partition on my desktop, leaving the last 5 1K blocks unzeroed, which should be enough to allow for the added inode entry.
Then delete the zero.tmp file.

I was expecting bzip2 to have better compression, but in this test, gzip was better.

After the first image backup, I think using tar with the -g option would be better. After the first tar backup, only incremental backups are performed until you start a new cycle.
Quote:

`-g SNAPSHOT-FILE'
During a `--create' operation, specifies that the archive that
`tar' creates is a new GNU-format incremental backup, using
SNAPSHOT-FILE to determine which files to backup. With other
operations, informs `tar' that the archive is in incremental
format. *Note Incremental Dumps::.
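
A rough sketch of how that might look (the file names here are just examples):

Code:

# Level-0 (full) backup; the snapshot file records what was archived and when
tar -czf /backup/home-full.tar.gz -g /backup/home.snar /home

# Later runs against the same snapshot file only archive files that changed
tar -czf /backup/home-incr1.tar.gz -g /backup/home.snar /home

# To start a new cycle, move the snapshot file aside and take a fresh full backup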
The tar info manual has an example of tarring entire partitions, with the receiving end of the pipe untarring them in a subshell. You could easily modify this so that you could restore an entire partition over the network, using netcat for example:
Quote:

$ (cd sourcedir; tar -cf - .) | (cd targetdir; tar -xf -)

You can avoid subshells by using `-C' option:

$ tar -C sourcedir -cf - . | tar -C targetdir -xf -

The command also works using long option forms:

$ (cd sourcedir; tar --create --file=- . ) \
| (cd targetdir; tar --extract --file=-)
# Or:
$ tar --directory sourcedir --create --file=- . \
| tar --directory targetdir --extract --file=-
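
With netcat it might look something like this (the host name and port are made up, and the exact nc options vary between netcat versions):

Code:

# On the receiving machine: listen on a port and unpack into the target directory
nc -l -p 7000 | tar -C /targetdir -xf -

# On the sending machine: tar up the source directory and push it across
tar -C /sourcedir -cf - . | nc receiving-host 7000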
I think that external drives work best for backups, however. One of the DVDs containing the backup could become damaged. Also, cat'ing image slices is more difficult when the backup takes up several DVDs. The larger the image, the more likely a drive problem will cause a failure during the backup, and the more likely one of the DVDs will be damaged.

On a very full system, a tar backup of the /home directories may be too large to fit on a DVD, or may exceed the file size limit of a FAT32-formatted external drive. In that case, you can pipe the output through the split program. To restore, simply cat the slices together and pipe the output into the tar command.

I used tar and split to back up my home directory when I performed a fresh upgrade install. The external drive was formatted as ext3; it wasn't empty, so I didn't reformat it. I kept the slices reasonably sized (800MB - 1GB, I don't remember exactly) and used par2create to create parity files in case one of them became damaged. I was able to cat the files back together in the reverse process, and could even produce a tar listing that way. I didn't restore all of the files I had, but was able to restore just what I wanted (newer podcasts, PDF documents, Konqueror web archives).
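
A sketch of that kind of backup, with made-up sizes and paths:

Code:

# Back up /home, compress, and slice into ~1GB pieces
tar -czf - /home | split -b 1024m - /mnt/external/home.tar.gz.

# Optionally create parity files so a damaged slice can be repaired
par2create /mnt/external/home.par2 /mnt/external/home.tar.gz.*

# List or restore by concatenating the slices back together
cat /mnt/external/home.tar.gz.* | tar -tzf -
cat /mnt/external/home.tar.gz.* | tar -xzf -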

Using dd, these two options could be useful:
Quote:

`skip=BLOCKS'
Skip BLOCKS `ibs'-byte blocks in the input file before copying.

`seek=BLOCKS'
Skip BLOCKS `obs'-byte blocks in the output file before copying.
For example, you could create slices and include the starting block and the count in each filename. Then during the restore, the filename would supply the seek and count values for the dd command. This would allow restoring from several DVDs even if you restored them out of order.
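
For instance, something like this (the block size and file names are invented for illustration):

Code:

# Slice 0: the first 4096 blocks of 1MB each
dd if=/dev/hda1 of=slice_skip0_count4096.img bs=1M skip=0 count=4096
# Slice 1: the next 4096 blocks
dd if=/dev/hda1 of=slice_skip4096_count4096.img bs=1M skip=4096 count=4096

# Restore a slice into its original position, in whatever order it comes back
dd if=slice_skip4096_count4096.img of=/dev/hda1 bs=1M seek=4096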

---

A backup or restore will go a lot faster without compression. Some file types do not compress well anyway, and drives have become larger and cheaper, so saving time may be more important than saving bits, especially for larger backups.

---

Some filesystems have a dump program that you can use to dump a partition's filesystem, backing up only the parts that are actually in use. One example is xfsdump, which can dump a live XFS filesystem. See the readme files in /usr/share/doc/ for xfsprogs and the xfsdump man page for more details.
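
If I remember the syntax correctly, it goes roughly like this (the paths are only examples):

Code:

# Level-0 (full) dump of the filesystem mounted at /home to a file on another drive
xfsdump -l 0 -f /mnt/external/home.xfsdump /home

# Restore the dump into a mounted destination filesystem
xfsrestore -f /mnt/external/home.xfsdump /home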

----

Added bonus: Another handy use of dd. Generating a random preshared key:
Code:

dd if=/dev/random bs=32 count=1 2>/dev/null | od -t x1 | sed  '$d;s/^[[:xdigit:]]* //;s/ //g' | tr -d '\n'; echo

