LinuxQuestions.org
Old 12-31-2007, 01:20 AM   #421
AwesomeMachine
LQ Guru
 
Registered: Jan 2005
Location: USA and Italy
Distribution: Debian testing/sid; OpenSuSE; Fedora; Mint
Posts: 5,524

Original Poster
Rep: Reputation: 1015

Thanks for the info, BT+1.
 
Old 12-31-2007, 05:13 AM   #422
sanjaykatiyar1
LQ Newbie
 
Registered: Mar 2007
Location: India
Distribution: Redhat, Fedora
Posts: 4

Rep: Reputation: 0
Thanks, mates, for a wonderful article and replies.

Sanjay
 
Old 01-04-2008, 05:01 AM   #423
On2ndThought
LQ Newbie
 
Registered: Apr 2007
Posts: 13

Rep: Reputation: 0
Clone/Restore whole drives when drives are different sizes

First, THANK YOU SOOOO MUCH for taking the time to write such an awesome post that is phenomenally informative. I found your post after googling "Norton Ghost" and other related searches, trying to find some SIMPLE way to clone an entire HDD. I'd downloaded and tried to use Clonezilla, but for some reason (which I no longer recall), I wasn't able to figure out for sure what I was supposed to select and when. And given that I didn't want to reverse the order of things, I went searching for other options. That's when I found your tutorial, and suddenly it was all so SIMPLE! Just one ultrasimple (yet, ULTRAPOWERFUL) command in Linux and voila, the HDD was being cloned. So again, THANK you SOOOOO MUCH for all the work you've put into both writing the original posts, and in ALLLLL your help in following up to assist n00bies like myself.

Second, given that this thread has grown to about 6 million pages, I've not taken the time to read EVERY post and response. So I ask that you please forgive me if my questions have already been asked and answered.

Third, here is my situation. I received a laptop for xmas. It came with Vista, which I immediately wiped and replaced with WinXP. In the process I split the 80GB HDD down the middle (more or less), and decided to use the other half as a place to learn Linux. During the past year I've spent maybe a total of 6-8 hours playing with live CDs of various distros (DSL, Puppy, SUSE, Kubuntu, Backtrack2); then, when I saw PCLinuxOS being highly praised on DistroWatch, I decided to make it my first real testbench.

This left me with an 80GB HDD split as follows (in round numbers):
10.00GB = hda1 = The Factory Installed Vista Restore Partition
33.25GB = hda2 = WinXP
02.00GB = hda5 = Linux Swap
07.75GB = hda6 = / (mount point)
20.75GB = hda7 = /home (mount point)

With so many partitions of various sizes, my preference was some way that would allow me to back-up/clone the whole HD at once, rather than having to manually re-partition my destination drive to match the same sizes/etc. And so when I read in your post that dd will do a bit-for-bit copy from one whole drive to another, including the partition information, I decided that was the best possible way for me to go.

Only one small problem: the destination drive I had was a brand new, just-out-of-the-box 300GB USB HDD. So new, it hadn't even been formatted yet. So, in order for it to be recognized, I had to do a quick format in XP. (Yes, I know that I could have accomplished the same result in Linux, but I don't yet know how to mount an unformatted drive.)

(BTW, just to clarify one small thing: I didn't know if Linux would barf (like Windoze) if I tried to clone/restore to the partition that Linux is currently using as '/'. So, to be safe, for both the cloning and the restoration, I booted from a copy of backtrack2 that I have on a 4GB USB flash drive. Since I'm booting a live distro from the USB flash, it mounts my HDDs as /dev/sda (80GB) and /dev/sdc (300GB USB), and that is reflected below.)

Anyway, after reading the applicable parts of your post, I decided to use the following basic command format:

Quote:
dd if=/dev/sda of=/dev/sdc conv=notrunc,noerror
I typed that in, and while it was running I started wondering whether I should try to optimize the 'bs' setting. I also started to worry: what will happen if dd "runs out of" data on the source drive before reaching the end of the destination drive? I wondered if maybe I was making a really stupid mistake and dd would do something I didn't know about after reading the last bits of the source drive. With this in mind I decided to put a cap on how much data dd would maximally read.

I know that an 80GB HDD is not really EXACTLY 85899345920 bytes (80*1024^3), but I figured, at the very least, I could avoid dd trying to run until it had filled the left-over 300GB with data from some other source. In addition, taking into account that the USB drive has a 16MB internal cache, I tried several experiments with very large bs values, attempting to see if keeping the buffer pretty much full would improve my xfer speed. (When I ctrl-c'd out of the initial command, I discovered that by not setting any bs value, my xfer rate was abysmally low.)

Ultimately I settled on the following:

Quote:
time dd if=/dev/sda count=10240 bs=8M of=/dev/sdc conv=notrunc,noerror
Although both the laptop and USB HDD are brand new, and USB 2.0 claims a data xfer rate of 480Mb/s (roughly 60MB/s), I was rather disappointed to discover that my actual data xfer rate hovered right around 20MB/s. This remained true whether I set bs=4K or various amounts up to 8M. As a result, copying the entire 80GB took a bit over 68 minutes. And by the time I was finished it was about 3am, so after doing a cursory confirmation that indicated the USB drive seemed to duplicate the correct number of partitions and sizes, I put the whole thing away for the night.
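(For what it's worth, the count/bs arithmetic in that command works out to exactly 80 GiB; a quick sanity check with shell arithmetic:)

```shell
# count=10240 blocks of bs=8M (8 * 1024 * 1024 bytes each)
bytes=$(( 10240 * 8 * 1024 * 1024 ))
echo "$bytes"   # 85899345920, i.e. exactly 80 * 1024^3
```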

Then tonight, I made a boneheaded mistake and totally fubar'd the display settings in my root account. Being utterly clueless about what .conf files to edit (or what changes to make once I found the right files), I decided to just restore from my back-up.


I used the following to perform the restoration because (so far as I know), only the partition holding '/' (/dev/sda6) is screwed up:

Quote:
time dd if=/dev/sdc6 of=/dev/sda6 bs=8M conv=notrunc,noerror
For what it's worth, doing a partition copy (rather than a drive copy) bumped my xfer speed up to 25.5MB/s, and it completed its task in just a bit over 5m24s.

While waiting for the restoration to complete, I started poking around in backtrack just to see what all was there. While doing so I came across KinfoCenter, and clicked on "Partitions" just out of curiosity.

In a nutshell, here is what it told me. The total size of each partition on the 300GB drive matched its counterpart on the 80GB drive. BUT, the space used/free was VASTLY different between the two drives. And invariably the 300GB drive showed only about 60% - 80% of the amount used on the 80GB drive. In other words:
Quote:

Part ........ Total Size .. Free Size .. (Used For)
/dev/sda1 ... 9,994 MB .... 3,166 MB ... (Factory Vista Restore)
/dev/sdc1 ... 9,994 MB .... 2,879 MB

/dev/sda2 .. 33,291 MB ... 13,850 MB ... (WinXP)
/dev/sdc2 .. 33,291 MB ... 14,929 MB

/dev/sda6 ... 7,787 MB .... 2,625 MB ... (Mount Point for '/')
/dev/sdc6 ... 7,787 MB .... 4,551 MB

/dev/sda7 .. 20,718 MB ... 20,451 MB ... (Mount point for '/home')
/dev/sdc7 .. 20,718 MB ... 20,451 MB
When I saw that I decided it was time to post here.

OK, so, yeah, I tend to be WAY too long-winded, but I wanted you to have the complete picture of what all I'd done (and why) before I hit you with my questions.

So my questions are as follows.
  1. Given the command I used for cloning sda to sdc, shouldn't the free space be exactly the same on each counterpart? I mean, if there are (allegedly) the same number of files, and each file is the correct number of bytes, then wouldn't it just make sense that free space should also be the same?
  2. Is there a simple way I can "compare" each set of two partitions to make sure every file copied correctly? Again, I am a complete n00bie, so I may be asking a really stupid question, but is there some command vaguely similar to dd that will do a byte-for-byte comparison between any 2 sources?
  3. Presuming there is not a simple way to compare the counterpart, would you agree that the "backup" I made is very likely to be corrupt and non-viable?
  4. Presuming the "backup" is corrupt, what would you say is a likely cause? The fact that the drives have different physical internal geometries? The sheer difference in sizes? (Such as, "don't try to clone whole drives when the sizes don't match".) Or, is it that I used bs=8M, when in actual fact the physical sector on the drive is only 4k?
  5. Regardless of possible corruption, was the 'count' parameter superfluous while backing up the 80GB? In other words, would dd have automatically stopped on its own when it reached the 'end' of the 80GB drive, even though it had not reached the 'end' of the USB drive?
  6. Was it necessary to boot from another distro before copying the drive/partition where '/' is currently mounted? Or would it have been ok to clone/restore a drive while it is being used by the OS?

Well, that is MORE than enough for now. Again, I thank you soo very much for all that you've already done to help out folks like me. And also, again, sorry if I'm posting on an issue that has already been discussed.

Best Regards,
Brian
 
Old 01-04-2008, 05:12 AM   #424
JZL240I-U
Senior Member
 
Registered: Apr 2003
Location: Germany
Distribution: openSuSE Tumbleweed-KDE, Mint 21, MX-21, Manjaro
Posts: 4,629

Rep: Reputation: Disabled
Here is another incarnation of this thread:
http://www.linuxquestions.org/questi...ommand-366442/

Some of your answers are right close to the top, On2ndThought, but I don't want to anticipate the true master.

Btw. why did you create a second thread, AwesomeMachine?
 
Old 01-04-2008, 06:18 AM   #425
noisebleed
Member
 
Registered: Feb 2007
Location: Porto, Portugal
Distribution: Gentoo
Posts: 41

Rep: Reputation: 15
The best dd article on the web. Thanks
 
Old 01-08-2008, 10:56 AM   #426
AwesomeMachine
LQ Guru
 
Registered: Jan 2005
Location: USA and Italy
Distribution: Debian testing/sid; OpenSuSE; Fedora; Mint
Posts: 5,524

Original Poster
Rep: Reputation: 1015
Some Misunderstandings

On2ndThought,

The KDE partition reporting tool is not the best. I would suggest:

fdisk -l /dev/hda (or sda), etc.

Every file system has a mount point. '/' is A mount point. /home is also a mount point if /home is a separate partition. Each partition can hold one file system (e.g. ext3, xfs, reiserfs, NTFS, FAT32). Linux understands many file systems. I personally use ext3 for /boot, make it 100 MB, and make it the first primary partition. I make an xfs partition for '/', and a separate xfs partition for /home. I also make swap, as the last partition. I usually put /home and swap in logical partitions.

USB hard drive enclosures do much of the work normally done by the operating system inside the drive hardware itself. It is usually seamless, but you have discovered limitations in how seamless it can be. USB sectors on a large drive are larger than the 512-byte sectors of internal drives. This means every file consumes more space on the USB drive.

Every file on a partition has to end somewhere, and that last sector is only partially used. The rest is called slack space, or wasted space. With 512-byte sectors, each file can theoretically waste a maximum of 511 bytes. File system block sizes are another issue. On a CD, the sector size is 2k. USB drives vary, but it is not uncommon to have 4k to 8k sectors.
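The slack-space arithmetic is easy to check with shell arithmetic; a minimal sketch for a hypothetical 1000-byte file on 512-byte sectors:

```shell
# A hypothetical 1000-byte file on 512-byte sectors
filesize=1000
sector=512
sectors_used=$(( (filesize + sector - 1) / sector ))   # round up to whole sectors
slack=$(( sectors_used * sector - filesize ))          # unused bytes in the last sector
echo "sectors used: $sectors_used, slack: $slack bytes"   # 2 sectors, 24 bytes slack
```

With 4k sectors the same 1000-byte file would waste 3096 bytes, which is why larger sectors inflate the "used" figure.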

Your throughput on the USB 2.0 interface is typical. 480 Mb/s is not a sustained transfer rate. Between 9 MB/s and 38 MB/s is about right. The 480 figure describes how fast data travels to the interface, not how fast it can travel through the memory i/o and disk i/o using dd. I would suggest a block size no larger than 4k for what you are doing. I would also suggest reading the entire OP.

If you take these things into consideration, I'm sure you will see that everything is ok. You can run e2fsck on the USB partitions. Rarely would dd fail, AND e2fsck report a good filesystem.
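(As for question 2, the byte-for-byte comparison: this is not from the post above, but the usual tools are cmp and md5sum, which work on partitions just as on files. A sketch using small temp files as stand-ins for the two partitions:)

```shell
# Two small files standing in for the source and clone partitions
dd if=/dev/urandom of=/tmp/src.img bs=1024 count=64 2>/dev/null
cp /tmp/src.img /tmp/clone.img

# cmp -s is silent and exits 0 only if the two are byte-for-byte identical
if cmp -s /tmp/src.img /tmp/clone.img; then
    echo "identical"
else
    echo "differ"
fi

# Checksums also work, and are handy when the copies live on different machines
md5sum /tmp/src.img /tmp/clone.img
```

In real use you would run e.g. `cmp /dev/sda1 /dev/sdc1` from a live CD, with both partitions unmounted.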
 
Old 01-09-2008, 02:02 AM   #427
mujahed_khan
LQ Newbie
 
Registered: Oct 2007
Posts: 15

Rep: Reputation: 0
Thanks a lot, man. As I am a newbie here in Linux, these commands are really going to help me out. Thanks once again.
 
Old 01-09-2008, 03:42 AM   #428
AwesomeMachine
LQ Guru
 
Registered: Jan 2005
Location: USA and Italy
Distribution: Debian testing/sid; OpenSuSE; Fedora; Mint
Posts: 5,524

Original Poster
Rep: Reputation: 1015
JZL240I-U,

This is the second thread. I quickly realized with the first thread that I needed the first post space, and the first reply, for a total of 50,000 characters, in order to have room for expansion. When I created this thread, I copied and pasted the first thread, and I replied immediately, securing another 25,000 characters of space immediately after the first post. I had no idea how many people would use the thread, or that so many sites would link to it. I only knew there was no good dd documentation, and especially none that newbies could understand. God helped me a lot along the way, leading me to hitherto unknown uses of dd.

I think the whole idea of dd, even the screwy command line, is cool. It's like having a Bible, or a guitar. You never run out of things to do with it, and you never get bored, except waiting for it to finish copying a large partition.
 
Old 01-27-2008, 09:06 PM   #429
On2ndThought
LQ Newbie
 
Registered: Apr 2007
Posts: 13

Rep: Reputation: 0
Thanks so much for getting back to me...

Quote:
Originally Posted by AwesomeMachine View Post
On2ndThought,

The KDE partition reporting tool is not the best. I would suggest:

fdisk -l /dev/hda (or sda), etc.

Every file system has a mount point. '/' is A mount point. /home is also a mount point if /home is a separate partition. ...
Dear Mr. AwesomeMachine,

First, thank you so much for taking the time to reply clearly, concisely, and informatively.

Second, my apologies for not getting back here sooner. Been sorta busy around the old homestead lately.

Third, thank you for explaining how the sectors on USB hard drives can be larger than those on internal drives. This is key to explaining why the amount of physical space used is different for the various partitions on the external USB vs the internal drive. ...

(Although, completely unrelated to dd, I am a bit surprised that a modern IDE HD would have different sector sizes depending on whether it is plugged directly into the motherboard or plugged in via a USB cable. I would have thought that the physical sector size was primarily controlled by the circuit board on the HD. To my mind the HD circuitry controls the actual physical layout, and this would be true no matter what variety of cable (IDE or USB) was used to connect to the computer.)

Fourth, I did read the entire OP, but focused my attention primarily on those parts which directly addressed copying one entire HD to another. What I did not do (and, admittedly I should have), was read all 422 previous follow-up posts/questions/responses. No doubt if I had, I would have found the answers to most of my questions. My apologies to you for not taking the time to do so.

Fifth, thank you for the suggestions on how to use fdisk and e2fsck to accurately check the drive sizes and the integrity of the resultant partitions.

Sixth, yes, I know that 480 Mb/s is not the "sustained" data rate. I recall that many years ago some devices advertised a "burst" rate, and the actual rate one should expect was always substantially lower. So I didn't really expect a sustained rate of 480 Mb/s (60 MB/s), but I did expect something more than just a third of that.

Finally, just out of mild curiosity ... I'd always been told that the biggest factor affecting data xfer rates to/from any HD was the mechanical limitation of how long it takes to physically move the heads from one place on the HD to another. (And, of course, this is especially true when one is accessing files on a heavily fragmented drive.)

My USB HD (an off-the-shelf IDE drive stuck into a box containing a power supply and USB interface), has an internal cache of 16 MB. For that reason, I theorized that keeping that cache pretty much filled would minimize the write time to that HD. I thought that by doing so the write speed would be limited only by the mechanical limitations of the drive, because there would always be full pipeline of data ready/waiting to be written.

Of course, I understand that it also takes time to read 8 MB from the internal drive. But here again, if one is reading (and writing) an 8 MB burst of contiguous data, then I would have thought this would result in the absolute minimal possible mechanical limitations.

For these two reasons, I thought it seemed far better to read/send 10240 bursts of 8 MB than 20971520 micro-bursts of 4 KB, especially given the large cache on the receiving HD.

Yet, you still advocate using a relatively small bs of 4 KB. Of course I defer to your vastly greater wisdom and experience, but it's unclear to me why so many small reads are preferred over a fewer number of large reads. Would you care to assist me to become as enlightened as yourself?
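(One way I could settle this empirically, I suppose, is to simply time dd at several block sizes. A sketch using a temp file as a stand-in for the real device; numbers on a real disk, especially over USB, will differ:)

```shell
# A file standing in for the source device
dd if=/dev/zero of=/tmp/bench.img bs=1M count=8 2>/dev/null

# Time a read at each block size; GNU dd prints throughput on its last line
for bs in 4k 64k 1M 8M; do
    printf 'bs=%s: ' "$bs"
    dd if=/tmp/bench.img of=/dev/null bs="$bs" 2>&1 | tail -n 1
done
```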

Sincerely,
Brian

Last edited by On2ndThought; 01-27-2008 at 09:16 PM. Reason: Correcting minor typos, clarifications.
 
Old 01-28-2008, 10:16 AM   #430
adam.rb
LQ Newbie
 
Registered: Jan 2008
Posts: 3

Rep: Reputation: 0
Hello, I've been using dd for a few years now, but haven't found a good solution
to this problem: securely erasing a hard drive efficiently. I recently cloned a
500 GB SATA disk to another 500 GB SATA disk. Took me 2.5 hours, copying data at
70+ MB/s to start and averaging about 60 MB/s. Now I want to securely erase the first
disk. I would be happy with one random data pass (if it was quick, maybe I'd do
2 or 3). Anyway, the Linux commands 'wipe' and 'shred' are very slow (probably
very good, too), and dd with /dev/random is extremely slow and /dev/urandom gets
me ~ 3 MB/s (still talking days for one pass here). Even /dev/zero is slow. So
how can I speed things up here? If nothing else works, I suppose I could copy 50
GB of random data to the other hard drive and copy this back to ten 50 GB
partitions on the drive I want to wipe.
Any better ideas?
 
Old 01-28-2008, 10:22 AM   #431
JZL240I-U
Senior Member
 
Registered: Apr 2003
Location: Germany
Distribution: openSuSE Tumbleweed-KDE, Mint 21, MX-21, Manjaro
Posts: 4,629

Rep: Reputation: Disabled
Never tried it myself: http://dban.sourceforge.net/ You might search for more there...
 
Old 01-28-2008, 10:59 AM   #432
adam.rb
LQ Newbie
 
Registered: Jan 2008
Posts: 3

Rep: Reputation: 0
thanks for the reply, JZ- yeah, there is DBAN and another 10 wipe utilities out there and one of them might work well, but does anybody have any dd voodoo that would do the trick here?
 
Old 02-01-2008, 09:25 AM   #433
adam.rb
LQ Newbie
 
Registered: Jan 2008
Posts: 3

Rep: Reputation: 0
OK, went with the /dev/urandom command, one pass. Not enough for the truly paranoid or those with something worth erasing, but a week of nights running dd at 3.4 MB/s is enough for me. Good luck to anyone else with this problem.
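(A workaround sometimes suggested for the slow-/dev/urandom problem, though not raised in this thread: pipe an AES-CTR keystream from openssl into dd. Encrypting /dev/zero with a throwaway key yields a cryptographically strong pseudorandom stream far faster than /dev/urandom could produce one. A sketch, with a 1 MiB file standing in for the target device, which would be /dev/sdX in real use:)

```shell
# A 1 MiB file standing in for the disk to be wiped (/dev/sdX in real use)
dd if=/dev/zero of=/tmp/wipe.img bs=1M count=1 2>/dev/null

# Encrypt /dev/zero with a random throwaway key; dd caps how much is written
openssl enc -aes-256-ctr -nosalt \
    -pass pass:"$(head -c 32 /dev/urandom | base64)" \
    < /dev/zero 2>/dev/null |
  dd of=/tmp/wipe.img bs=64k iflag=fullblock count=16 conv=notrunc 2>/dev/null
```

For a whole disk, drop the count and let dd run until it hits the end of the device.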
 
Old 02-04-2008, 02:04 AM   #434
JZL240I-U
Senior Member
 
Registered: Apr 2003
Location: Germany
Distribution: openSuSE Tumbleweed-KDE, Mint 21, MX-21, Manjaro
Posts: 4,629

Rep: Reputation: Disabled
Well, it would not be my choice. Think: random bits are also used to obscure encrypted data, e.g. in a TrueCrypt container. If you just want to erase your hd, you have to flip bits repeatedly from 1 to 0 and back. What you have done is certainly thorough erasure as far as any "normal" user is concerned. But I can't help you with your wish to use dd for a fast erasure, sorry.

Last edited by JZL240I-U; 02-04-2008 at 02:07 AM.
 
Old 02-08-2008, 10:05 AM   #435
JZL240I-U
Senior Member
 
Registered: Apr 2003
Location: Germany
Distribution: openSuSE Tumbleweed-KDE, Mint 21, MX-21, Manjaro
Posts: 4,629

Rep: Reputation: Disabled
Quote:
Originally Posted by AwesomeMachine View Post
...If you are just curious about what might be on you disk drive, or what an MBR looks like, or maybe what is at the very end of your disk:
Code:
dd if=/dev/sda count=1 | hexdump -C 
Will show you sector 1, or the MBR. The bootstrap code and partition table are in the MBR....
I seem to remember that the bootloader takes the first 446 bytes, the next 64 bytes are the partition table, and the last 2 bytes are the boot signature (446+64+2=512). And the rest of the first cylinder always stays empty for the reason AwesomeMachine stated: part of a cylinder can't belong to a partition.

Am I right in assuming that
Code:
dd if=/some/file of=/dev/hdx bs=512 seek=1 count=16064 {or less}
can hide sensitive data? Vice versa, unauthorized data could be hidden there, so it might be wise to fill that space from /dev/zero, making it easy and fast to check whether anything is there that doesn't belong to one's setup?

Quote:
Originally Posted by AwesomeMachine View Post
...There are 63 sectors per cylinder, and 255 heads per cylinder. Then there is a total cylinder count for the disk. You multiply out 512x63x255=bytes per cylinder. 63x255=sectors per cylinder. ... This writes the last 5102 sectors to myfile. Launch midnight commander (mc) to view the file. If there is something in there, you do not need it for anything. In this case you would write over it with random characters:
Code:
dd if=/dev/urandom of=/dev/sda bs=512 seek=234436545 
Will overwrite the 5102 surplus sectors on our 120 GB Seagate drive.
Question: is the sector size always 512? Your numbers and arguments suggest so, but I seem to remember that when I created my file system there was a message like "created file system in 4096 byte sectors". Is that something different?

I would suggest one take /dev/zero instead of /dev/urandom to fill that space. It is much easier to see if there is something hidden (i.e. not zero) which doesn't belong there: when the pattern of uniform zeros is broken, all is clear. Discerning a pattern in random noise from /dev/urandom is much harder. One might even write a small script with a loop to check regularly. The gap at the end of my hd is >5.7 MB, plenty enough to hide nasty things...
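(The zero-fill check could be scripted, something like the following sketch. A small file stands in for the disk here; the idea is to count non-zero bytes in the suspect region, where zero means nothing is hiding there:)

```shell
# A file standing in for the disk: sector 0 "used", sectors 1-99 zeroed
dd if=/dev/zero of=/tmp/gap.img bs=512 count=100 2>/dev/null
printf 'bootcode' | dd of=/tmp/gap.img conv=notrunc 2>/dev/null

# Count non-zero bytes in the suspect region (sectors 1-99);
# zero means the gap is still clean
dd if=/tmp/gap.img bs=512 skip=1 count=99 2>/dev/null | tr -d '\000' | wc -c
```

On a real disk the if= would be /dev/hdx with skip/count covering the post-MBR gap.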

Last edited by JZL240I-U; 02-08-2008 at 10:09 AM.
 
  

