Old 02-25-2005, 05:32 PM   #1
Registered: Oct 2004
Posts: 309

Rep: Reputation: 30
using dd to backup?

i'm trying to back up my windows NTFS partition using a knoppix disk, a secondary hard drive, and the dd command. i've been using this command, which i saw on the wiki here: "dd if=/dev/hda1 | gzip > /mnt/hdb1/hda1bak.img.gz". i have a few questions/problems that i'm hoping you guys could help me with:

1. i've seen several examples and they change bs to 512, 1024, etc. without saying why. do i need to change bs depending on the drive i'm grabbing? for example, if my NTFS drive was formatted with 1024-byte clusters, would i have to set bs to 1024?

2. also, some descriptions say bs is the number of bytes read at a time, and others say it's the number of blocks read at a time. and what is bs measured in? bytes, kilobytes, megabytes?

3. why does dd take so long? i did a zero-fill on my 160 gig hard drive with "dd if=/dev/zero of=/dev/hda bs=1M" and it took eleven hours to finish. it also takes a very long time when i try to grab an image of my NTFS partition (only 40 gigs).

4. i tried to increase bs so that it might go faster. i increased it from the default to 3M, so the command i ran was "dd if=/dev/hda1 bs=3M | gzip > /mnt/hdb1/hda1bak.img.gz". this took a very long time to finish, and when it was done it gave me an error about the file size being too large.

will somebody help me plz?

Last edited by slinky2004; 02-25-2005 at 06:14 PM.
Old 02-26-2005, 02:52 PM   #2
Registered: Feb 2005
Posts: 42

Rep: Reputation: 15
dd's default block size is 512 bytes. Try this to verify it:
dd if=/dev/zero of=foo count=1

`ls -l foo` should show it's 512 bytes long

Things are probably taking so long because you're not using an optimal block size.

Increasing the block size doesn't necessarily make dd go faster. What you want to do is to have the block size correspond to one cylinder of the hard disk. This will minimize the number of seeks that each head on the hard disk has to perform.

You should be able to get the disk parameters (C/H/S) from the manufacturer. You can also try to get the disk parameters from hdparm, but these might be misleading.

To verify it, you can run tiobench or another disk benchmark.
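For example, here's a rough way to compare block sizes yourself. This is only a sketch: it uses a 64 MiB scratch file so it's safe to run anywhere; substitute SRC=/dev/hda to measure the actual disk (a scratch file in the page cache won't show seek effects the way the raw disk does).

```shell
# Rough block-size comparison. SRC is a scratch file here so the script
# is safe to run as-is; point SRC at /dev/hda to test the real disk.
SRC=/tmp/dd_bench_src
dd if=/dev/zero of="$SRC" bs=1M count=64 2>/dev/null

for bs in 512 4096 65536 1048576; do
    count=$((67108864 / bs))          # same 64 MiB total for every pass
    printf 'bs=%s: ' "$bs"
    # dd prints its transfer statistics on stderr; keep the last line
    dd if="$SRC" of=/dev/null bs="$bs" count="$count" 2>&1 | tail -n 1
done
rm -f "$SRC"
```

With GNU dd the last line of each pass reports the elapsed time, so you can see directly which block size wins on your hardware.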

For example:
/sbin/hdparm -g /dev/hda
geometry = 2584/240/63, sectors = 39070080, start = 0

If I believed this device really had 240 heads, I'd do transfers one cylinder at a time. One cylinder is 240 heads * 63 sectors/track = 15120 sectors, and since dd's bs is given in bytes, that's 15120 * 512 = 7741440 bytes:

dd if=/dev/hda of=/dev/null bs=7741440
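Applied to the original backup, that works out as follows (a sketch only, not verified on the poster's hardware; it assumes /mnt/hdb1 is a filesystem that allows files larger than 2 GB, which may be the cause of the "file too large" error mentioned above):

```shell
# Cylinder size in bytes for the example geometry:
# 240 heads * 63 sectors/track * 512 bytes/sector
CYL=$((240 * 63 * 512))
echo "$CYL"        # prints 7741440

# The backup itself, reading one cylinder per block
# (commented out so this sketch is safe to run):
# dd if=/dev/hda1 bs="$CYL" | gzip > /mnt/hdb1/hda1bak.img.gz
```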


