LinuxQuestions.org
Old 02-09-2008, 05:26 PM   #436
AwesomeMachine
LQ Guru
 
Registered: Jan 2005
Location: USA and Italy
Distribution: Debian testing/sid; OpenSuSE; Fedora; Mint
Posts: 5,524

Original Poster
Rep: Reputation: 1015
Post Some Answers ...


Quote:
Originally Posted by On2ndThought View Post
Dear Mr. AwesomeMachine,

I am a bit surprised that a modern IDE HD would have different sector sizes depending on whether it is plugged directly into the motherboard or connected via a USB cable. I would have thought that the physical sector size was controlled primarily by the circuit board on the HD. To my mind the HD circuitry controls the actual physical layout, and this would be true no matter what variety of cable (IDE or USB) is used to connect to the computer.
Quote:
Originally Posted by AwesomeMachine
USB hard drives use a virtual partition that dictates a virtual sector size.
Quote:
Originally Posted by On2ndThought
I'd always been told that the biggest factor affecting data transfer rates to/from any HD was the mechanical limitation of how long it takes to physically move the heads from one place on the HD to another. My USB HD (an off-the-shelf IDE drive in a box containing a power supply and USB interface) has an internal cache of 16 MB. For that reason, I theorized that keeping that cache essentially full would minimize the write time to that HD. I thought that by doing so the write speed would be limited only by the mechanical limitations of the drive, because there would always be a full pipeline of data ready and waiting to be written.

Brian
Quote:
Originally Posted by AwesomeMachine
First of all, neither dd nor any other program pulls data directly from the on-drive cache. The drive electronics use the cache, translate its contents into usable data for the machine, and feed it to the drive interface on the mobo, where it is processed either by the CPU or by the DMA controller.

Also, the cache does not necessarily fill completely, empty completely, and then fill again. The drive does a lot of work internally. For example, some drives can begin reading anywhere in a track; the drive does not need to wait for the beginning of the track to come around. This means data is read out of order, so the drive logic executes an algorithm to put the data back into the order it was written.

It's as if you held your finger stationary above a rotating circle with inches 0-63 marked on the circumference. If you want to know how many inches are in the circle, you can wait until zero passes under your finger, count each inch through one full rotation, and stop back at zero. Or you can start counting immediately at any old place, count around to where you began, and work out in your head that the sequence does not begin at 32 and end at 31, but starts at 0 and ends at 63.

That's easy for you to figure out, because you have learned how to count. But drives never learn anything; they need algorithms to rearrange the cached data into what the rest of the machine expects.

Also, caching relies heavily on guessing. Using a caching algorithm, the drive tries to guess what data near the current read will be needed next. In a computer system, data near the data just read is always more likely to be needed than data far away from it. This applies to memory, CPUs, drives, video cards, and probably other things.

Since the probability is greater, loading data that sits near immediately-needed data into fast cache memory is the correct thing to do more often than not.

It's like a brewery. It has a warehouse, and it fills the warehouse with everything needed to brew beer. That's because the probability of a brewery needing things used to brew beer is greater than of it needing things to make bicycles. You might find a few things in the brewery warehouse that are also used in making bicycles, and no one can tell you where they came from or what they are used for.

That's why the cache is flushed of stale data after a certain number of clock cycles. Cache is arranged in blocks, and each block has attributes, one of which is a time stamp: not the time humans use, but a time relative to the clock pin on the CPU.

As long as the warehouse is refilled with things needed to brew beer before it runs empty, the factory keeps operating.

The device fills the cache as fast as it can with the data most likely to be needed soon; the odds have been calculated by the humans who write the caching algorithms, so it works. A processor reads memory into its level 2 cache. A processor runs at a much higher core frequency than RAM, so during the roughly 300 clock cycles it takes RAM to get ready for another read, the CPU fills its level 1 cache from the level 2 cache and keeps executing instructions, many times faster than it could with no on-die cache.

The dd bs=4k setting is nowhere engraved in stone. Sometimes it's better, especially over an ethernet connection, to use bs=16065b, i.e. 16065 sectors (the b suffix means 512-byte blocks). That is equal to one cylinder, and machine-to-machine transfers for some reason work better with that block size. The way dd works is safe, but slow: it fills a read buffer, checks the data, fills a write buffer, checks the data, and writes it. The drive cache really isn't a critical factor.
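As a concrete sketch of the two block sizes being compared (the device names here are placeholders only, not a recommendation):

Code:
# cylinder-sized blocks: 16065 sectors * 512 bytes, about 7.8 MiB per I/O
dd if=/dev/hda of=/dev/hdb bs=16065b

# the common 4 KiB block size, for comparison
dd if=/dev/hda of=/dev/hdb bs=4k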
This post isn't as detailed as an engineer would need to understand how these things really work. Nevertheless, it is pitched at the right level for Linux users.
 
Old 02-09-2008, 06:21 PM   #437
AwesomeMachine
LQ Guru
 
Registered: Jan 2005
Location: USA and Italy
Distribution: Debian testing/sid; OpenSuSE; Fedora; Mint
Posts: 5,524

Original Poster
Rep: Reputation: 1015
JZL240l-U,

A cylinder is 16065 sectors in LBA addressing for most PCs: 255 heads (0-254) * 63 sectors per track (1-63) = 16065 sectors per cylinder. File systems use blocks, and each block is usually more than one sector. There is no fast way to sterilize a drive; dd, using /dev/zero, is the best way. That will wipe well enough to preserve your privacy. Random characters aren't essential, and really add no security over zeroes.

Drive manufacturers all have utilities designed to write zeroes over their drives. hdparm also has a man page worth reading.
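A minimal sketch of the zero-wipe described above; /dev/hdb is a placeholder, so triple-check the device name before running anything like this, because it destroys everything on the drive:

Code:
# overwrite the entire drive with zeroes, one cylinder per write
dd if=/dev/zero of=/dev/hdb bs=16065b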
 
Old 02-10-2008, 07:26 AM   #438
saikee
Senior Member
 
Registered: Sep 2005
Location: Newcastle upon Tyne UK
Distribution: Any free distro.
Posts: 3,398
Blog Entries: 1

Rep: Reputation: 113
I am a bit surprised by the widespread need to wipe a hard disk clinically clean, as though a banker were trying to remove every trace of his clients' information for safety reasons, or a fleeing terrorist were trying to clean up the evidence.

For an average PC user, the normal deletion of the entries in the partition table should clearly be adequate.

This is because once all the partitions have been deleted, the hard disk is not bootable and the BIOS cannot use it; with no partition boundary defined, it must treat the disk as raw, as if fresh from a new purchase.

Once a new set of partitions is created, the data area will be overwritten and the hard disk is 100% functional as a new unit.

With the previous partition table gone and the previous file-indexing system partly or wholly destroyed/overwritten, it is a forensic job to reassemble any of the data. Thus wiping a hard disk clean, zero-filling it, etc. is only necessary if one does not want any of the data to be recoverable. I doubt this is the intention of every PC user.

On a practical level, complete removal of the partition table, which is a mere 64 bytes long, takes no more than a minute, say with the fdisk or cfdisk terminal programs, and after a reboot the hard disk is as good as new. What can possibly be gained by spending hours setting every bit of every byte on the hard disk to "0", except shortening its life?

It is true that complete destruction of the 64-byte partition table does not touch the interior of any of the partitions. However, the means of locating the position/boundary of each partition has also been removed. If another partition table is created, a quick format is performed to destroy the previous file-indexing system, and a new boot loader is implanted in the boot sector, the disk should carry very little risk of being affected by what was there before. This, I would have thought, would be sufficient for the majority of PC users.
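For illustration: on an MBR disk the 64-byte table described above occupies bytes 446-509 of sector 0, so it can be blanked with a single dd call (/dev/hda is a placeholder):

Code:
# zero only the 64-byte MBR partition table at offset 446, leaving the boot code intact
dd if=/dev/zero of=/dev/hda bs=1 count=64 seek=446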

Maybe we should start questioning the necessity of zero-filling/wiping hard disks clean, to see if there are bomb makers among us?

Last edited by saikee; 02-10-2008 at 07:30 AM.
 
Old 02-10-2008, 03:04 PM   #439
On2ndThought
LQ Newbie
 
Registered: Apr 2007
Posts: 13

Rep: Reputation: 0
Thanks so much for the in-depth explanation, Mr. Machine, you truly are *AWESOME*

<bowing to the Master!>

Just to add, my mistake was naively thinking that the cache was roughly analogous to a FIFO queue. But I see that your understanding FAR exceeds my own, and it also shows why a larger bs value is not always advantageous.

Thanks again!
Bill

Last edited by On2ndThought; 02-10-2008 at 03:07 PM. Reason: Fixed typos
 
Old 02-10-2008, 03:41 PM   #440
On2ndThought
LQ Newbie
 
Registered: Apr 2007
Posts: 13

Rep: Reputation: 0
Yes, some people really are fanatical about cleansing their HDs

Quote:
Originally Posted by saikee View Post
I am a bit surprised by the widespread need to wipe a hard disk clinically clean, as though a banker were trying to remove every trace of his clients' information for safety reasons, or a fleeing terrorist were trying to clean up the evidence.

...

Maybe we should start questioning the necessity of zero-filling/wiping hard disks clean, to see if there are bomb makers among us?
I've not built any bombs recently (lol), but I do recognize the desire for a complete wipe of the HD that does the best reasonable job of removing personal user data. On Windoze there is CCleaner (and a handful of others), which will remove the garbage files that accumulate in tmp and cache folders, and these files can contain personal or sensitive data. Some of these programs can also be told to "wipe" a file multiple times, to make it far harder to recover if your computer/HD should fall into the hands of someone who wishes you ill. Additionally, there is an excellent program called Eraser that will do a DoD-level wipe of all unused space on your HD. What I have not found (yet) are programs offering this depth of security in the Linux world. (But granted, I've only recently started using Linux, and the programs may be available and I've simply not yet found them.)

In a nutshell, here is my point. Some people, for whatever their reasons may be, want an extra level of security for the information on their HDs. Some, such as myself, understand that "deleted" is not really "gone forever", and that a deleted file CAN be recovered with the right software. Furthermore, I recognize that even when parts of a file are overwritten by new files (or, as you point out, a partition table is deleted), it is still possible, though difficult, to recover data from file fragments. With this in mind, I find it comforting to have programs that will do DoD-level (and beyond) wiping of files that I want GONE.

One last point: there are two issues here. The first is cleansing a drive that you wish to reformat and reuse and/or give away. The second is keeping a "clean" HD in the event that your computer/HD falls into the hands of someone who wishes you ill. The steps you point out would be minimally sufficient for the first case, because if you are reusing or giving away the drive, removing the partition data makes it rather hard for the average user to recover any data. But for those of us who like to keep the drive "clean" while we continue to use it, your solution is not viable. That's why I use CCleaner, Eraser, and other programs in Windoze, and why I wish I could find equivalent programs in the Linux world. And honestly, given that security is such an integral part of the Linux philosophy, and how prolific the open source community is, I'm a little surprised that such software is not standard in most distros.

Sorry if this is somewhat off topic, but I was responding to the desire, raised earlier in this thread, to cleanse a HD.

Respectfully,
Brian
 
Old 02-10-2008, 03:51 PM   #441
dive
Senior Member
 
Registered: Aug 2003
Location: UK
Distribution: Slackware
Posts: 3,467

Rep: Reputation: Disabled
I guess if you were selling an old hard drive on eBay, for instance, you would want to make sure it's been cleaned.
 
Old 02-11-2008, 01:27 AM   #442
JZL240I-U
Senior Member
 
Registered: Apr 2003
Location: Germany
Distribution: openSuSE Tumbleweed-KDE, Mint 21, MX-21, Manjaro
Posts: 4,629

Rep: Reputation: Disabled
Quote:
Originally Posted by AwesomeMachine View Post
...File systems use blocks. Each block is usually more than 1 sector.
Ahhhhhrg. That's where I was muddled. Yep, the blocks are 4096 bytes each. Thanks, AwesomeMachine.

Have I got it right: one sector = 512 bytes?

Quote:
Originally Posted by AwesomeMachine View Post
...There is no fast way to sterilize a drive. ... Random characters aren't essential, and really aren't added security over zeroes....
Agreed. I just wanted to point out that one can detect malware more easily in a "sea" of zeroes than in random noise, and that there is normally inaccessible space at both the beginning and the end of the drive.
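As a quick check of the 512-byte-sector assumption, one can ask the kernel directly (a minimal sketch, assuming the disk is /dev/sda):

Code:
# print the logical sector size the kernel reports for the device
blockdev --getss /dev/sda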

Last edited by JZL240I-U; 02-11-2008 at 08:39 AM.
 
Old 02-15-2008, 11:10 AM   #443
tomot
LQ Newbie
 
Registered: Feb 2008
Posts: 1

Rep: Reputation: 0
Is the dd command a permanent change?

On my eeepc the SSD is hdc1, and the 2 GB SDHC card is sda1.
I'm being asked to perform the following operation:

dd if=/dev/hdc1 of=/dev/sda1 (this takes about 1 hr to perform), followed by: dd if=/dev/hdc of=/dev/sda bs=512 count=1

I'm following this step as part of a longer procedure to clone the OS from the SSD to the SDHC,
so that in the end the eeepc can boot one OS from the SDHC or another OS from the SSD.

Question: is this dd operation a permanent change to the structure of the SDHC card,
or can the SDHC revert to its prior existence, for example by reformatting it as FAT32 or perhaps NTFS?
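For reference, here is my reading of what each of those two steps touches (same device names as above):

Code:
# copy the contents of the first SSD partition onto the first SDHC partition
dd if=/dev/hdc1 of=/dev/sda1

# copy the SSD's first sector, i.e. the MBR (boot code + partition table), onto the SDHC card
dd if=/dev/hdc of=/dev/sda bs=512 count=1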

TIA!
 
Old 02-20-2008, 08:44 AM   #444
jindalarpan
Member
 
Registered: Mar 2006
Posts: 94

Rep: Reputation: 15
dd command syntax

hi all

i have one hard disk with many partitions.
i want to copy one partition, /dev/hda3, to another one, /dev/hda6.

for this i used the following command:

dd if=/dev/hda3 of=/dev/hda6

but it's been more than 3 hrs and the command still has not completed. is there any parameter that needs to be passed?

both partitions are 25 GB.

thanks
 
Old 02-22-2008, 08:02 AM   #445
mambopoa
LQ Newbie
 
Registered: Feb 2008
Posts: 2

Rep: Reputation: 0
Great post!

Thanks!
 
Old 02-22-2008, 09:47 AM   #446
onebuck
Moderator
 
Registered: Jan 2005
Location: Central Florida 20 minutes from Disney World
Distribution: Slackware®
Posts: 13,922
Blog Entries: 44

Rep: Reputation: 3158
Hi,

Quote:
Originally Posted by saikee View Post
I am a bit surprised by the widespread need to wipe a hard disk clinically clean, as though a banker were trying to remove every trace of his clients' information for safety reasons, or a fleeing terrorist were trying to clean up the evidence.

...

Maybe we should start questioning the necessity of zero-filling/wiping hard disks clean, to see if there are bomb makers among us?
I like to make sure that any data is destroyed, be it in written form or bit form. Security is important in today's society. Identity theft is a real problem: the modern-day pickpocket.

I rebuild systems as a way to keep my mind active. You would be surprised how many people just format the drive and think the data is gone. Huh! If I wanted to do something (un-ethical), it would be so easy.

Trashing the partition table does nothing except remove the table. Then a format? No go: the data is still there. Not readily available, but there for the knowledgeable user. Even shredding the drive data does not mean the data is gone; there can be residual effects that are traceable, depending on how much time one has and how badly one wants to recover the data.

Some companies require the total physical destruction of any HDDs to ensure the data is indeed destroyed.

As for your last statement about bombers: the potential or possibility of you coming across a HDD that a terrorist has been using is rather slim.

Paranoia?
 
Old 02-22-2008, 10:55 AM   #447
fw12
Member
 
Registered: Mar 2006
Distribution: Fedora core, Ubuntu
Posts: 175

Rep: Reputation: 31
Quote:
Originally Posted by jindalarpan View Post
i want to copy one partition, /dev/hda3, to another one, /dev/hda6.

...

dd if=/dev/hda3 of=/dev/hda6

...

but it's been more than 3 hrs and the command still has not completed.

dd if=/dev/hda3 bs=2M of=/dev/hda6
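The bigger block size cuts the number of read/write system calls dramatically, which is usually what makes a bare dd crawl. Assuming a reasonably recent coreutils (8.24 or later added the status=progress option), a live progress readout can be added as well; a sketch:

Code:
# 2 MiB blocks, plus a progress readout (status=progress needs coreutils >= 8.24)
dd if=/dev/hda3 of=/dev/hda6 bs=2M status=progress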
 
Old 02-29-2008, 06:49 PM   #448
Equinn
LQ Newbie
 
Registered: Feb 2008
Location: California
Distribution: Slackware 13.1
Posts: 24

Rep: Reputation: 0
dd for Extra Large Drives

Hello,

I am trying to use dd to image the hard drive on my Linux machine. The source and target are both 300 GB. The problem is that it errors out at 137 GB, which just so happens to be the maximum available with 28-bit addressing. Does dd not know about 48-bit addressing? Is there a version out there that does?

I am using Slackware Linux.

Here is the command I used:

dd if=/dev/hdb of=/dev/sda

Here is the response I got:

dd: writing to '/dev/sda': Input/output error
268435449+0 records in
268435448+0 records out
137438949376 bytes (137 GB) copied, 10195.7 seconds, 13.5 MB/s

I would appreciate any advice.

Thanks,
EQuinn (extreme newbie)
 
Old 03-04-2008, 01:26 AM   #449
JZL240I-U
Senior Member
 
Registered: Apr 2003
Location: Germany
Distribution: openSuSE Tumbleweed-KDE, Mint 21, MX-21, Manjaro
Posts: 4,629

Rep: Reputation: Disabled
Quote:
Originally Posted by Equinn View Post
...Does DD not know about 48 bit addressing? Is there a version out there that does?...
Do a
Code:
dd --version
That'll show which version of dd you're using. (It would be nice if you used the user panel to fill in the info about your machine and software...)

I suspect, though, that this is an effect of the addressing done via the BIOS. Have a look at what it says there about using LBA and come back.
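You could also double-check what the kernel and the drive report (a sketch, assuming the source disk is /dev/hdb as in your command):

Code:
# size of the device as the kernel sees it, in bytes
blockdev --getsize64 /dev/hdb

# drive identification; look for the "48-bit Address feature set" line in the output
hdparm -I /dev/hdb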
 
Old 03-04-2008, 08:22 AM   #450
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by Equinn View Post
Does DD not know about 48 bit addressing?
I recently used dd to copy 250 GB with no problems. I selected bs=4M because I hoped it would make the best use of the cache inside each drive.

I expect most people who use dd for very large copies use some high value for bs, so maybe there is a bug when the number of blocks gets too high. At bs=4M my 250 GB is just 59605 blocks (I'm assuming 4M means 4*1024*1024 bytes).
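For reference, the block-count arithmetic checks out (a sketch of the calculation only):

Code:
# 250*10^9 bytes / (4*1024*1024 bytes per block) = 59604 full blocks, plus one partial block = 59605 writes
echo $(( 250000000000 / (4 * 1024 * 1024) ))   # prints 59604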
 
  

