Wiping an HDD using /dev/urandom versus /dev/zero, a theoretical question
So, I see a lot of people recommending that you wipe a disk using /dev/urandom instead of /dev/zero for "maximum security".
What difference would it make? All data is overwritten in both cases, so why would /dev/urandom be better than /dev/zero? Assuming a worst-case scenario: I've heard that there may be a way to somehow see what a bit was previously set to, especially if the bit was NOT flipped. Is this possible?
I'm not sure how this would work, but what kind of transition would you be able to see? 0 to 0 and 1 to 1, or 0 to 1 and 1 to 0, or both? In either case, wouldn't the best thing be one wipe with /dev/one (which doesn't exist) and one wipe with /dev/zero ... optimal, right? Does anyone know more about this, or have more links? I don't really understand everything they say in the paper above, but I understand the highlighted bit. It also says:
http://www.dban.org/node/40 |
Another member, linus72, is experimenting with wiping/screwing-up a virtual drive to see what can be recovered in this thread:
http://www.linuxquestions.org/questi...cue-it-736194/ |
I think the info you have already pulled up does a good job of debunking the need for multiple overwrites.
Head-offsetting, residual magnetic imaging, resonance, and other physical attacks may have been possible in the early days, but with the high densities and recording techniques that modern storage devices use, I believe they are far less likely to succeed. It's still worth using urandom to fill a disk you intend to use encryption on (so an attacker can't tell empty space from used space), but when all you want to do is erase your disk, overwriting with any old junk is probably good enough: x'00', x'FF', x'AA', x'55', it doesn't really matter. Reading from /dev/urandom is very, very slow; it can take days to overwrite a typically sized disk these days. Doing multiple passes from urandom would require you to be very paranoid indeed. |
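For anyone who wants to try the commands being discussed, here's a rough sketch of single overwrite passes using dd. It deliberately targets a scratch file (/tmp/fakedisk.img is a made-up path) rather than a real device; substitute a real /dev/sdX only if you actually mean to destroy that disk:

```shell
# Scratch file standing in for a disk; do NOT use a real device
# (/dev/sdX) unless you mean to destroy its contents.
img=/tmp/fakedisk.img

# One pass of pseudo-random data, as you might do before setting up
# encryption. count=1 limits the write, because a regular file
# (unlike a real device) never hits end-of-media.
dd if=/dev/urandom of="$img" bs=1M count=1 2>/dev/null

# One pass of zeros: far faster than reading from /dev/urandom.
dd if=/dev/zero of="$img" bs=1M count=1 2>/dev/null

# Confirm every byte is now zero by deleting NUL bytes and counting
# whatever is left over.
if [ "$(tr -d '\0' < "$img" | wc -c)" -eq 0 ]; then
    echo "wipe verified: all zeros"
fi
```

On a real device you'd drop count= and let dd run until it hits the end of the disk.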
The theory is that when you go from 1 to 0, the result isn't a clean 0 but a weak signal. I think the long format in Windows (format c:) sets every bit directly to zero. Therefore the old data can be recovered using analysis tools, and not necessarily software; sometimes you'd have to hand the drive over to a laboratory, and they would be able to recover a lot of data. Not everything, but often enough to reconstruct important documents etc.
If you overwrite with ones (a friend of mine made a kernel patch for /dev/one, but I don't know if he published it on the Internet :P) and then go back to zero, there might still be some background information. In practice this is impossible (today) to recover fully, but you would be able to get some words here and there. If you use /dev/urandom, though, you don't know for sure whether there was a one or a zero before, which makes recovery hopeless. I read this some days ago. It was easy to understand and it isn't too long. It explains in a little more detail how this works and why /dev/urandom is better than /dev/one, which in turn is better than /dev/zero. |
http://lists.ibiblio.org/pipermail/p...ch/000002.html I haven't tested it, though. Thanks for the link; it explains things better. |
From what I've read so far, looks like I'll be sticking to 1 wipe with /dev/zero ...
|
I will probably do the same. Perhaps I will delete certain files with DBAN. Overwriting one time with '1' seems to be good enough against three-letter agencies, so using /dev/urandom is a complete waste of time IMHO.
|
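To put a number on the speed claim made earlier in the thread, here's a quick benchmark sketch against a scratch file (/tmp/speedtest.img is a made-up path, and the 64 MiB size is arbitrary). On most systems the zero pass finishes noticeably faster than the urandom pass, though how large the gap is depends on your kernel and hardware:

```shell
# Hypothetical scratch file; raise count= for a longer-running test.
img=/tmp/speedtest.img

echo "zero pass:"
time dd if=/dev/zero of="$img" bs=1M count=64 2>/dev/null

echo "urandom pass:"
time dd if=/dev/urandom of="$img" bs=1M count=64 2>/dev/null
```

Writing to a file in /tmp also measures filesystem overhead, so treat the absolute times as rough; the relative difference between the two sources is the interesting part.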