LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - Hardware (https://www.linuxquestions.org/questions/linux-hardware-18/)
-   -   Trouble Making DD Image (https://www.linuxquestions.org/questions/linux-hardware-18/trouble-making-dd-image-568777/)

ssenuta 07-12-2007 11:08 AM

Trouble Making DD Image
 
I tried using the dd cmd to make a compressed image of my 30G sata partition (sda1) on the 60G backup partition (hda3) of my pata drive, but it failed.

dd if=/dev/sda1 conv=noerror,sync | gzip > /mnt/backup/sda1.img.gz

The cmd did produce a 12.3G sda1.img.gz --But when I tried to unmount the pata hda3 (/mnt/backup) partition I got a "BUSY" message. Also, I used the cmd "ps -e" & discovered that the "dd" process was still running, so I tried to kill it but it wouldn't die. --So I typed "shutdown -r now", but that failed & my system froze. I had to power off.

Now, when I re-booted, the bad shutdown caused my ext3 journal to recover & the machine booted up fine. However, when I checked my /mnt/backup/sda1.img.gz file, I discovered that its size had changed from 12.3G to 7.6G, so something was wrong.


Here is how I finally got the above "dd" cmd to work:

I ran the dumpe2fs cmd on ALL my partitions & discovered that the "large file" feature was missing on all of them. To correct this, I used the cmd "dd if=/dev/zero of=large.img bs=4k count=64k" to create a large 3G file on ALL my partitions. Then I ran "e2fsck -f" on ALL the partitions (unmounted of course), deleted the large.img file on ALL partitions & rebooted. Now, when I ran the dumpe2fs cmd, it showed the "large file" feature.
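For reference, the sequence described above can be sketched as follows, using the device names from this thread. One caveat (my addition, not from the post): `bs=4k count=64k` only writes 4k x 64k = 256M, and the "large file" flag is set when a file crosses 2G, so either a larger count or a sparse file created with `seek=` is needed; the sparse variant is shown here.

```shell
# 1. Check the feature flags recorded in the superblock (run as root):
dumpe2fs -h /dev/hda3 | grep -i 'features'

# 2. Create a file bigger than 2G on the mounted filesystem; seeking past
#    3G and writing one byte makes a sparse file without writing gigabytes:
dd if=/dev/zero of=/mnt/backup/large.img bs=1 count=1 seek=3221225471

# 3. Unmount, force a check, remount, and remove the marker file:
umount /mnt/backup
e2fsck -f /dev/hda3
mount /mnt/backup
rm /mnt/backup/large.img
```

This is a sketch of the poster's procedure, not a guarantee that the flag is what caused the original failure.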

As a further precaution, I checked my kernel-2.6.20.11 .config & discovered that I had large single file (LSF) support enabled but not large block devices (LBD), so I enabled that too & re-built my kernel.

I finally booted my pata drive to init 3 using my new kernel and successfully created sda1.img.gz. So, it seems that linux large single file (LSF) support doesn't fully work unless you also configure your kernel with large block device (LBD) support. However, I am no expert & just want to pass on my "dd" experience to others who might run into the same problem. Thank you all for your past support & for reading this post.

maroonbaboon 07-12-2007 08:46 PM

I am a bit mystified by this. According to the kernel docs, the LBD and LSF options only come into play if you have a disk or a single file larger than 2TB, and I figured I still had a couple of years before I needed to worry about that. Even then, they are not needed on 64-bit systems.

Are you sure this is related to your problem?

jschiwal 07-12-2007 09:09 PM

What was the filesystem on the pata drive?

Also, if you send dd the USR1 signal, it will print out its progress.
Because you need root to access the device, you need to use sudo to run the dd command as root.
You also need to use "sudo killall -s USR1 dd" to get the progress output.
Code:

sudo killall dd -s USR1
jschiwal@hpamd64:~> 2141905+0 records in
2141904+0 records out
1096654848 bytes (1.1 GB) copied, 138.023 s, 7.9 MB/s

jschiwal@hpamd64:~> sudo killall dd -s USR1
jschiwal@hpamd64:~> 2619857+0 records in
2619856+0 records out
1341366272 bytes (1.3 GB) copied, 164.08 s, 8.2 MB/s

The filesystem on the drive you are saving the backup to needs to be able to store files larger than 2GB. So don't save to a vfat external drive unless you also pipe the output through the split command to break the output into more manageable slices. You can use cat to reassemble them and pipe the result through gunzip, so you never need to reassemble one large 10GB file. You could even use par2create to create parity files to protect the backup.
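As a sketch of that pipeline (the 1024m slice size, the /mnt/vfat mount point, and the slice prefix are illustrative, not from the thread):

```shell
# Image, compress, and split into ~1 GB slices in one pipeline; split appends
# suffixes (aa, ab, ...) to the given prefix:
dd if=/dev/sda1 conv=noerror,sync | gzip | split -b 1024m - /mnt/vfat/sda1.img.gz.

# Later, reassemble the slices and restore without ever materializing one
# big file on disk:
cat /mnt/vfat/sda1.img.gz.* | gunzip | dd of=/dev/sda1
```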

Is this a backup after your initial installation? It is usually better to back up files instead of the entire partition, although an initial image backup is fine. You might want to look at the kdar program. It makes routine backups easy. You can export a script from kdar that performs the same backup and have it run daily as a cron job.

ssenuta 07-13-2007 09:40 AM

Trouble making a dd image file
 
No, I am not sure LBD is related to my failed "dd" backup. I guess I could have attempted to make my sda1.img.gz after ensuring that dumpe2fs was persistently showing the "large file" feature on every boot, but I didn't; I went & enabled LBD in the kernel before I ran the successful "dd" cmd. Sorry, but I didn't want to risk another freeze-up w/o a good backup clone available.

Also, I thought large files were defined as files > 2G & that is why I always include that feature in my kernels. Why do you think dumpe2fs wouldn't show the "large file" feature until I actually created a 3G file?

maroonbaboon 07-13-2007 10:05 AM

Quote:

Originally Posted by ssenuta
Why do you think dumpe2fs wouldn't show the "large file" feature until I actually created a 3G file?

This seems to be set as a warning to old (2.2 and before) kernels which could not handle files >4GB. See:

http://www.fs-driver.org/faq.html#large_file

So unrelated to the LSF kernel option.
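For anyone following along: if the flag is just a compatibility marker, it can also be toggled directly with tune2fs rather than by creating a dummy >2G file. This alternative is not mentioned in the thread; the device name is an example, and the partition should be unmounted first.

```shell
# Set the large_file feature flag directly in the superblock:
tune2fs -O large_file /dev/hda3

# Confirm it now appears in the feature list:
dumpe2fs -h /dev/hda3 | grep -i 'filesystem features'
```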

ssenuta 07-13-2007 10:51 AM

Trouble making dd image file
 
Thanks for the tip about using the cmd "killall dd -s USR1". I'll have to read up on sending signals to processes. I only tried "killall dd" & "kill [pid]" on the dd process, which didn't work.

Also, I did run the "dd" cmd as root (su) & all my PATA & SATA partitions (except my swap partitions, of course) are ext3 with the default journal mode. My linux distribution is 32-bit Mandriva-2006.

Now, can someone please verify if the following are valid commands to restore my sda1.img.gz to /dev/sda1:

1.) dd if=/mnt/backup/sda1.img.gz | gzip -d > /dev/sda1
2.) gzip -dc /mnt/backup/sda1.img.gz | dd of=/dev/sda1
3.) cat /mnt/backup/sda1.img.gz | gzip -d | dd of=/dev/sda1

I personally am leery of #3, but I saw it somewhere on the net & thought I'd run it by you guys. Thank you, stan
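Not an authoritative answer, but pipelines like #2 above can be sanity-checked on a throwaway file before pointing them at the real device. All paths here are illustrative:

```shell
# Make a small fake "partition image" and compress it the same way the
# backup was made:
dd if=/dev/urandom of=/tmp/fake.part bs=1k count=64 2>/dev/null
gzip -c /tmp/fake.part > /tmp/fake.part.gz

# Restore with the shape of pipeline #2, then compare byte-for-byte:
gzip -dc /tmp/fake.part.gz | dd of=/tmp/fake.restored 2>/dev/null
cmp /tmp/fake.part /tmp/fake.restored && echo "round trip OK"
```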

