Ok, first let me state that this may be a software question as it involves dd, it may be a general question as it involves NTFS, and it may be a hardware question as it involves /dev/hda.
I've got a laptop from which I would like to remove Windows XP, but I want to make an image of the hard disk first so that, if there are problems, I can restore XP and send the machine back to the company (which doesn't support Linux). I can't use any of the fancy imaging programs because I don't have Linux installed on the machine and I don't use XP.
I booted off the net, with the root of the filesystem hosted by my desktop, which has enough disk space to back up the laptop's 20 GB hard drive. So "/backup_xp" below actually points over a 100 Mb line to the hard drive on my desktop--there is no Linux physically on the laptop yet.
I tried the following (well, a variant of the following, but essentially identical):
Code:
for (( i=0; i<80; i++)); do dd if=/dev/hda bs=268435456 count=1 \
skip=$i of=/backup_xp/winxp_b_$i; done
This works fine for the first 8 iterations, then gives me an error:
Code:
dd: /dev/hda: Invalid argument
This error is repeated 8 times, and then dd continues on its merry way for another 8 "blocks". Every other run of eight blocks works; the next eight fail.
The reason I use the above line is twofold: 1) there's a 2 GB filesize limit on the desktop, so I can't just do dd if=/dev/hda of=/backup_xp/backup; 2) I use the large block size so I don't have to deal with too many pieces when I want to restore the hard drive, but I can't go much larger than that because the laptop only has 512 MB of RAM, so the block size needs to stay under that (I like powers of two, what can I say?).
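Since the failures line up with large seek offsets, one way to sidestep them entirely is to never seek: read the disk once, sequentially, and let split cut the stream into sub-2 GB pieces. This is a sketch I haven't tested on this exact setup, and it assumes a GNU split that accepts the -b size suffix; paths match the loop above:

```shell
# Seek-free alternative: dd reads /dev/hda sequentially, so there is no
# skip offset to overflow; split cuts the stream into pieces that stay
# under the 2 GB filesize limit on the desktop.
dd if=/dev/hda bs=1M | split -b 2000m - /backup_xp/winxp_b_

# To restore, concatenate the pieces back onto the disk:
#   cat /backup_xp/winxp_b_* | dd of=/dev/hda bs=1M
```

split names the pieces winxp_b_aa, winxp_b_ab, and so on, and those names sort in the right order, so a plain cat of the glob reassembles the image correctly.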
Does anyone have any insight as to why eight blocks (2 GB) at a time work and then eight don't? It seems like it could be anything from a dd problem to the hard drive to NTFS to an NFS filesystem limitation. . .
I did a cursory search here on LQ, and on google, but wasn't able to find anything useful.
I suspect it's a 2GB limitation that's getting in the way, I just don't know where.
It's weird, though. If I start out like so:
Code:
for (( i=8; i<80; i++)); do dd if=/dev/hda bs=268435456 count=1 \
skip=$i of=/backup_xp/winxp_b_$i; done
It still fails on blocks 8-15 this time. You might expect the pattern to shift with the loop, with blocks 8-15 working and 16-23 failing, but instead the same absolute blocks 8-15 fail, just like they did in the first command.
Weird.
P.S. I'm not necessarily looking for a better script; I know I can make better use of dd, and am in fact doing so in the way I'm mirroring my hard drive. The real reason I posted is that I think it's weird that there is a skip of ~2 GB after every 2 GB. It doesn't make a lot of sense, and I consider it a bug in whatever program is causing it. I wonder if there is a counter somewhere, maybe in dd, that rolls over (maybe to essentially -2 GB) at the 2 GB mark, and doesn't roll back to 0 for another 2 GB.
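That rollover hunch can be checked with plain shell arithmetic. The guess here (mine, not confirmed against the dd source) is that the seek offset skip × bs is being stored in a signed 32-bit value, as in a dd built without large-file support. If so, it goes negative exactly at block 8 and comes back positive at block 16, which matches the every-other-eight pattern:

```shell
# Emulate a signed 32-bit off_t for each block's seek offset.
# bs matches the dd loop above; the 32-bit wrap is the hypothesis.
bs=268435456   # 256 MiB
for i in 0 7 8 15 16 23 24; do
    off=$(( i * bs ))                  # true offset on the disk
    low=$(( off & 0xFFFFFFFF ))        # keep only the low 32 bits
    if [ "$low" -ge 2147483648 ]; then
        signed=$(( low - 4294967296 )) # reinterpret as signed 32-bit
    else
        signed=$low
    fi
    echo "block $i: offset $off -> 32-bit signed $signed"
done
```

Blocks 8-15 come out negative (block 8 is exactly -2147483648), and a negative seek offset is exactly the kind of thing lseek rejects with EINVAL, i.e. "Invalid argument". Block 16 wraps back to 0 and blocks 16-23 go positive again. One worrying consequence, if this hypothesis is right: the "working" blocks past 4 GB would actually be re-reading from near the start of the disk, so those pieces of the backup may be duplicates of earlier data and are worth verifying before trusting the image.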