Linux - Security: This forum is for all security-related questions. Questions, tips, system compromises, firewalls, etc. are all included here.
Out of curiosity, why can't bad sectors be written to? Is it something that the OS is currently incapable of, or does the HDD itself prevent you from doing it?
Because the drive's formatter board prevents it. What happens is that the address of the bad sector gets mapped to a replacement sector: any attempted access to the bad sector's address is redirected to the address of its replacement, and the bad sector itself is never touched.
The formatter board has taken over the bad-sector remapping function from the driver, making some errors impossible to recover from. A disk that has "too many" bad sectors can no longer be recovered at all.
What used to be done was that the manufacturer's bad block list was copied into the driver on first access. This list was originally created when the drive was first formatted with sector headers/checksums (the original definition of "formatting the drive"). The driver then used the replacement list itself, and could even expand the list as it detected additional errors. When the reserve list was used up, the DRIVER would report an error. If the admin desired, the disk could be backed up and reformatted using the driver's reformat function (the disk controller/disk formatter board had nothing to do with it). This allowed disks to be recovered, and during a verify pass errors could again be added to the bad block list, with the bad block sectors on the disk updated. This frequently returned disks to usability. It was even possible to expand the bad block list to extend the usable life of the disk.
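On a modern Linux box, about the only visible trace of that firmware-level remapping is SMART attribute 5, Reallocated_Sector_Ct, which counts sectors the drive has silently swapped for spares. A minimal sketch of reading it: the real command would be `smartctl -A /dev/sdX` (smartmontools), but here we parse an illustrative sample line rather than touch a real drive, so the values shown are made up:

```shell
# Sample line in the format "smartctl -A" prints (values are illustrative):
# ID NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       12'

# The raw value (last field) is the number of sectors already remapped.
raw=$(echo "$sample" | awk '{print $NF}')
echo "reallocated sectors: $raw"
```

On a real system you would watch the raw value over time; a steadily climbing count is the usual early warning that the drive's reserve pool is being consumed.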
When I first started working with computers (back around 1975, as an operator of a DEC System 10), disk drives would not be taken out of service unless there was a head crash (the disk read/write heads damaged, and thus the disk pack in use physically destroyed) or a spindle on a disk pack broke (destroying both disk pack and drive). Otherwise, a disk pack would just be reformatted and the bad block list expanded. It was normal for there to be 10-20 errors due to media defects, but there could be over 100 soft errors added (mostly due to header checksum errors exhausting the retry/offset recovery). Such soft errors COULD be recovered by reformatting. That didn't mean they wouldn't happen again, but it could take a month or two first. If the system admin chose, they could extend the list for the drive... The only requirement was that the list had to be stored at the beginning of the drive, and the reserve list was only created during a format phase, which designated the size of the replacement list. Only once the disk format/verify was completed could a filesystem be put on it. It did mean that the available storage got slightly smaller as the replacement list expanded.
All of that went away when the formatter board got its own processor and was embedded with the disk drive and disk pack. The resulting cartridge was faster (and a LOT smaller), and head crashes became a rather rare occurrence, since the enclosed disk prevents general contamination. But a good bit of flexibility was also lost.
Sometimes a low-level (device command) reformat will recover the sector (this is actually recovering from a soft failure rather than a hard failure, but the drive can't tell the difference) and overwrite it. But if a surface defect has finally been detected, that won't happen.
Distribution: Debian Sid AMD64, Raspbian Wheezy, various VMs
Posts: 7,680
Rep:
Don't some modern hard drives have a "secure wipe", or is that only available on drives with built-in hardware encryption (usually laptop drives)? I know the utility from my SSD manufacturer (Linux and Windows versions available) has a "secure wipe" option too, though I've not researched what it does.
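For reference, the "secure wipe" on modern ATA drives is the drive's own Security Erase command, reachable from Linux with hdparm. Below is a hedged sketch of the usual sequence: /dev/sdX and the password are placeholders, and DRY_RUN=1 makes the helper print each command instead of running it, since the real thing is destructive and will fail anyway if the BIOS has "frozen" the drive's security state:

```shell
DRY_RUN=1                 # set to 0 only on a drive you actually intend to erase
DEV=/dev/sdX              # placeholder device node
run() { [ "$DRY_RUN" = 1 ] && echo "would run: $*" || "$@"; }

# 1. Check the Security section of the identify data; it must say "not frozen".
run hdparm -I "$DEV"
# 2. Set a temporary user password (required before erase can be issued).
run hdparm --user-master u --security-set-pass Pass "$DEV"
# 3. Issue the erase; the drive firmware then wipes itself internally.
run hdparm --user-master u --security-erase Pass "$DEV"
```

If the drive reports "frozen", the usual workaround is to suspend/resume the machine or hotplug the drive so the BIOS freeze lock is lifted before step 2.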
Oh, a legal word of caution (mostly paranoia, though) for those in the UK using random data to wipe: if the authorities can convince a judge that what's on there is encrypted data, you can go to prison for something like five years for not giving up the key, and be sent back repeatedly until you can prove your innocence.
Well, then I guess it is better to use a PRNG like the Mersenne Twister, because then you can prove it is a PRNG by producing the seed and regenerating the stream. It is not as secure, but should be good enough. I usually use /dev/zero, which should also be good enough.
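Both approaches can be sketched on a scratch file standing in for the disk (a real wipe would target /dev/sdX instead of wipe.img). For the "provable PRNG" idea, an AES-CTR keystream from openssl stands in for a Mersenne Twister, purely as an illustration: anyone holding the seed passphrase can regenerate the identical stream, demonstrating the data is pseudorandom rather than ciphertext:

```shell
# 1 MiB scratch file standing in for the disk
dd if=/dev/urandom of=wipe.img bs=1M count=1 2>/dev/null

# Zero-wipe pass: on a real drive this would be dd if=/dev/zero of=/dev/sdX
dd if=/dev/zero of=wipe.img bs=1M count=1 conv=notrunc 2>/dev/null
cmp -s -n 1048576 wipe.img /dev/zero && echo "zeroed"

# Reproducible pseudorandom data: same passphrase, same byte stream, every time
openssl enc -aes-256-ctr -pass pass:myseed -nosalt </dev/zero 2>/dev/null \
  | head -c 1048576 > stream1.bin
openssl enc -aes-256-ctr -pass pass:myseed -nosalt </dev/zero 2>/dev/null \
  | head -c 1048576 > stream2.bin
cmp -s stream1.bin stream2.bin && echo "same seed, same stream"
```

The -nosalt flag is what makes the key derivation (and thus the stream) deterministic; with the default salted derivation each run would differ.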
Quote:
Originally Posted by metaschima
Well, then I guess it is better to use a PRNG like mersenne twister, because then you can prove that it is a PRNG by extracting the seed. It is not as secure, but should be good enough. I usually use /dev/zero, which should also be good enough.
Personally, I would make do with using /dev/zero. However, if there is data which may be misused, I suppose it's a trade-off between the two.
Personally, I have no data which I feel the need to delete apart from bank details -- I fear javascript more than an electron microscope attack.
I've heard a lot about using multiple passes of random data, but I've also heard that no information has ever been successfully recovered from a drive that's been zeroed out. Which is true?
The only issue is HOW the disk is being recovered, and who is doing the recovery.
For low expense, just making a dump of the disk is sufficient, and zeroing the disk is enough to neutralize that kind of recovery. Low expense would be around $1,500 - $5,000, as it usually requires replacing the formatter board with a special-purpose board, lots of calibration, and a read pass.
For HIGH expense the recovery can use a magnetic microscope. If only one pass at reading is done, then some data could be recovered from a single zero pass. (The reason is that writing the same value takes less space than writing alternating values - thus some space at the end of a sector might retain old data; and HIGH expense can be around $15k).
For REALLY REALLY high expense, multiple passes can be made. And the price goes higher if they choose to remove the top layer of the recording media and make multiple passes; sometimes more data can be recovered that way. To counter this, multiple overwrites can/will reduce the recoverable data. But REALLY REALLY high means prices for a disk scan will be well over $15,000 PER SCAN. (It requires a cleanroom environment...)
With the newest disks, it is usually uneconomical to do this as almost no data is that valuable.
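On the Linux side, the multi-pass overwrite described above is what shred(1) from coreutils does. A sketch on a scratch file (a real drive would be /dev/sdX): -n sets the number of random passes and -z appends a final pass of zeros so the wipe is less conspicuous:

```shell
# Scratch file standing in for a platter full of old data
dd if=/dev/urandom of=platter.img bs=1M count=1 2>/dev/null

# Two random overwrite passes, then one zero pass
shred -n 2 -z platter.img
```

Note that shred's own man page warns it is only meaningful when writes actually land in place; on journaling/COW filesystems and SSDs with wear leveling, overwriting a file does not guarantee the old blocks were touched.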
magnetic polarization leaves remnants of the original state even after you change it, thus recovery of data may still be possible after wiping (not with any gear you would have, though). multi-pass wipes that flip the bits help to stop this type of forensics work.
secure wipe is a HD controller function in most modern day HDs, but many times the mobo BIOS will block that command to protect the HD from getting accidentally wiped. some mobo BIOSes give access to HD commands from the BIOS utility itself, but it's rare from what i have seen.
i believe the HD controller can be told to attempt to write all sectors, even the bad ones. a "bad sector" is usually marked when a read failure occurs.
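That matches hdparm's low-level sector commands on Linux: a pending (unreadable) sector can be force-written, which lets the firmware either rewrite it in place or remap it. A dry-run sketch, where the device and LBA are placeholders and the long confirmation flag is the one hdparm genuinely requires because the write is destructive:

```shell
DRY_RUN=1
DEV=/dev/sdX     # placeholder device
LBA=123456       # placeholder sector number, e.g. one reported bad in dmesg
run() { [ "$DRY_RUN" = 1 ] && echo "would run: $*" || "$@"; }

# Confirm the sector really fails to read
run hdparm --read-sector "$LBA" "$DEV"
# Force-write it; the drive either fixes it in place or remaps it to a spare
run hdparm --yes-i-know-what-i-am-doing --write-sector "$LBA" "$DEV"
```

After the write, re-reading the sector and checking SMART attributes 5 and 197 shows whether it was remapped or merely rewritten.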
the place to go for magnetic stuff is CMRR at http://cmrr.ucsd.edu/ ; you can email the folks there and they usually have a good response time.
Last edited by Linux_Kidd; 05-15-2014 at 08:43 AM.
Quote:
Originally Posted by Linux_Kidd
magnetic polarization leaves remnants of original state even after you change the magnetic polarization, thus recovery of data is still possible even after wiping (not with any gear you would have though, etc). multi-pass wipes with flipping the bits helps to stop this type of forensics work.
It has never been proven to work, but if you are worried about it, I would encrypt the data instead of doing multiple passes.
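Encrypting up front means a later "wipe" only has to destroy the key, not fight magnetic remanence. Whole-disk setups would use cryptsetup/LUKS; as a self-contained illustration of the same idea, here is a file round-trip with openssl (the filenames and passphrase are made up for the example):

```shell
# Plaintext never needs to survive on disk once encrypted
echo "bank details" > secret.txt
openssl enc -aes-256-cbc -pbkdf2 -pass pass:hunter2 -in secret.txt -out secret.enc
rm secret.txt

# Recovery requires the passphrase; without it the ciphertext is just noise
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:hunter2 -in secret.enc
```

The practical upshot: to "wipe" an encrypted volume you only need to overwrite the small key/header area (or forget the passphrase), which sidesteps the whole multi-pass debate for the bulk of the data.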
There are also brute force levers that destroy the disk. But the shredder is what I've seen used.
When I was contracted to the Navy, that was ALSO their solution. It required special waivers for contracts that had "return to vendor" requirements. The only time this wasn't done was when the disk was being reused within the same group, which allowed some projects to have a sizable storage rack composed of "recycled" disks...
Nice video... It looked like one of the drives tried to escape
It looked like the hard drives came out the bottom in fairly large size chunks... If you could get the chunks from one hard drive (instead of that whole box), couldn't you use the magnetic microscope on the platter remnants, and get some info from that? Or are the shredder "jaws" magnetized as well?