Multiple HDD writes required to render data irretrievable?
Why is smeezekitty's contribution unacceptable? I think it is very fair. Overwriting data, although lengthy, is a service I offer professionally.
AIUI smeezekitty's technique does not address data written to blocks that have subsequently been identified as bad and hence mapped out. The dd command, working inter alia through the hardware interface, cannot write to such blocks, so their contents remain as readable as the degree of actual badness permits. This could allow recovery of significant data.
I talked about this with an expert some time ago. He also said that recovering overwritten data was a myth. Maybe it was possible with very old disks, but nowadays the density in disks is so high that it simply is impossible. If it were possible, don't you think the disk manufacturers would use it to increase disk space?
Utilities like shred can be bad in the sense that they can fool people into thinking a file is gone, when copies of the contents could remain in swap, journal files, automatic backups or at the ends of sectors. You need to overwrite the entire disk, and you cannot do that unless you use a live CD or similar.
Instead of using dd, you can delete all partitions on the disk, make a new one that uses the entire disk, and then run mkfs.ext2 with the -c option given twice, which performs a read-write bad block test that writes test patterns over every block. Then everything is overwritten, and you also get the benefit of a health check of the disk. If no sectors are bad, use the disk for something else. If some are, destroy and recycle it.
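The whole-disk overwrite the posters describe can be sketched as below — demonstrated on a scratch image file rather than a real drive, since the commands are destructive. To wipe an actual disk you would boot a live CD, point of= at the block device (e.g. of=/dev/sdX, a hypothetical name you must triple-check), and drop count= so dd runs to the end of the device.

```shell
# Minimal sketch of a whole-device overwrite, using a temp file as a
# stand-in for the block device (/dev/sdX would be the real target).
IMG=$(mktemp)
dd if=/dev/urandom of="$IMG" bs=1024 count=1024 2>/dev/null  # simulate old data

# Overwrite every block with zeros, exactly as dd would on the real device:
dd if=/dev/zero of="$IMG" bs=1024 count=1024 conv=notrunc 2>/dev/null
sync

# Verify the overwrite: compare the image against a zero stream of equal length.
if head -c 1048576 /dev/zero | cmp -s "$IMG" -; then
    RESULT=wiped
else
    RESULT=not-wiped
fi
echo "$RESULT"
rm -f "$IMG"
```

Note that, per the discussion below, this only reaches blocks the drive still presents through its normal interface; sectors already remapped by the firmware are untouched.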
AIUI smeezekitty's technique does not address data written to blocks that have subsequently been identified as bad and hence mapped out. The dd command, working inter alia through the hardware interface, cannot write to such blocks, so their contents remain as readable as the degree of actual badness permits. This could allow recovery of significant data.
Really!!
Are you saying that--if I read the whole disk with dd--I will get an output which does not include bad blocks, but still is in a contiguous sequence? This does not sound right to me---I had thought that fencing off of bad blocks was a function of the filesystem---which dd does not use.
I thought so too. But then, bad blocks are bad blocks ... you can't read them anyway, and you expect to get data from them ?
AIUI, AFAIK (and other expressions indicating tentative knowledge) bad block mapping is done in hardware and not visible to the OS except when running HDD OEM utilities. This link seems to support that concept. A single 512 byte block could contain enough data to be significant; if bad blocks develop in clusters, more than 512 bytes of contiguous data could be preserved.
EDIT:
I originally added that such blocks would be hidden from the OS's file system calls and thus from the dd command; scrub that. Stupidly I was forgetting that the dd command does not access the file system but the block device files, either for the whole HDD or for a partition.
It is impractical to manufacture HDDs without bad blocks. From this StorageReview page: "On modern hard disks, a small number of sectors are reserved as substitutes for any bad sectors discovered in the main data storage area. During testing, any bad sectors that are found on the disk are programmed into the controller. When the controller receives a read or write for one of these sectors, it uses its designated substitute instead, taken from the pool of extra reserves. This is called spare sectoring. In fact, some drives have entire spare tracks available, if they are needed. This is all done completely transparently to the user, and the net effect is that all of the drives of a given model have the exact same capacity and there are no visible errors. This means that the operating system never sees the bad areas, and therefore never reports "bad sectors"".
The testing mentioned above is part of the manufacturing process, but blocks that were good during that testing may become defective over the lifetime of the drive. From the same page: "These will normally be detected either during a routine scan of the hard disk for errors (the easy way) or when a read error is encountered trying to access a program or data file (the hard way). When this happens, it is possible to tell the system to avoid using that bad area of the disk. Again, this can be done two ways. At the high level, the operating system can be told to mark the area as bad and avoid it (creating "bad sector" reports at the operating system level). Alternately, the disk itself can be told at a low level to remap the bad area and use one of its spares instead".
From this StorageReview page: "Many drives are smart enough to realize that if a sector can only be read after retries, the chances are good that something bad may be happening to that sector, and the next time it is read it might not be recoverable. For this reason, the drive will usually do something when it has to use retries to read a sector (but usually not when ECC will correct the problem on the fly). What the drive does depends on how it is designed.
[snip some S.M.A.R.T. stuff]
Today's hard disks will also often take corrective action on their own if they detect that errors are occurring. The occasional difficulty reading a sector would typically be ignored as a random occurrence, but if multiple retries or other advanced error correction procedures were needed to read a sector, many drives would automatically mark the sector bad and relocate its contents to one of the drive's spare sectors. In doing so, the drive would avoid the possibility of whatever problem caused the trouble worsening, and thereby not allow the data to be read at all on the next attempt".
It is the above behaviour which means dd cannot be used to erase all data from an HDD. The bad blocks are still on the HDD, and their contents may not have been erased (there is no reason for HDD manufacturers to design their drives to do so). They can no longer be addressed through the normal interface, since any routine attempt is diverted to the appropriate spare sector, but it may be possible to interface with the HDD firmware using special diagnostic commands and retrieve the contents of those sectors which could "only be read after retries".
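One way to gauge how much data a drive may be hiding in remapped sectors is to read SMART attribute 5 (Reallocated_Sector_Ct) with smartctl from smartmontools — a hedged sketch, assuming smartctl is installed and that /dev/sda (a placeholder name) is the drive in question:

```shell
# Sketch: SMART attribute 5 counts sectors the drive has already remapped,
# i.e. exactly the sectors dd can no longer reach. /dev/sda is an assumed
# device name; reading SMART data normally requires root.
DEV=/dev/sda
if command -v smartctl >/dev/null 2>&1 && [ -r "$DEV" ]; then
    MSG=$(smartctl -A "$DEV" | awk '$1 == 5 { print "Reallocated sectors (raw):", $NF }')
    [ -n "$MSG" ] || MSG="attribute 5 not reported by $DEV"
else
    MSG="smartctl unavailable or $DEV not readable (install smartmontools, run as root)"
fi
echo "$MSG"
```

A raw value of zero suggests no sectors have been mapped out yet, so an overwrite with dd would have reached the whole visible surface; a non-zero value means some sector contents are beyond dd's reach.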