LinuxQuestions.org

H_TeXMeX_H 06-29-2009 05:13 AM

wiping HDD using /dev/urandom versus /dev/zero, a theoretical question
 
So, I see a lot of people recommending that you wipe a disk using /dev/urandom instead of /dev/zero for "maximum security".

What difference would it make? All the data is being overwritten in both cases, so why would /dev/urandom be better than /dev/zero?
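
Just so we're talking about the same thing, by "wipe" I mean a plain dd over the whole device, something like this (only a sketch: /dev/sdX stands in for whatever the target disk is, and bs=1M is just an arbitrary block size):

Code:

# one pass of zeros over the whole device
dd if=/dev/zero of=/dev/sdX bs=1M

# or one pass of pseudo-random data
dd if=/dev/urandom of=/dev/sdX bs=1M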

Assuming a worst-case scenario: I've heard that there may be a way to somehow see what a bit was previously set to, especially if the bit was NOT flipped. Is this possible?

Quote:

The general concept behind an overwriting scheme is to flip each magnetic domain on the disk back and forth as much as possible (this is the basic idea behind degaussing) without writing the same pattern twice in a row. If the data was encoded directly, we could simply choose the desired overwrite pattern of ones and zeroes and write it repeatedly. However, disks generally use some form of run-length limited (RLL) encoding, so that the adjacent ones won't be written. This encoding is used to ensure that transitions aren't placed too closely together, or too far apart, which would mean the drive would lose track of where it was in the data.
http://www.cs.auckland.ac.nz/~pgut00...ecure_del.html

I'm not sure how this would work, but what kind of transition would you be able to see? 0 to 0 and 1 to 1, or 0 to 1 and 1 to 0, or both? Well, in either case the best thing to do would be one wipe with /dev/one (which doesn't exist) and one wipe with /dev/zero ... optimal, right?
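
Come to think of it, you don't actually need a /dev/one for that; something like this should give an all-ones stream in userspace (again just a sketch, /dev/sdX is a placeholder):

Code:

# "all ones" pass: translate the zero stream into 0xFF bytes
tr '\0' '\377' < /dev/zero | dd of=/dev/sdX bs=1M

# followed by the all-zeros pass
dd if=/dev/zero of=/dev/sdX bs=1M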

Does anyone know more about this, or have some more links? I don't really understand everything they say in the paper above ... but I understand the highlighted bit.

It also says:

Quote:

In the time since this paper was published, some people have treated the 35-pass overwrite technique described in it more as a kind of voodoo incantation to banish evil spirits than the result of a technical analysis of drive encoding techniques. As a result, they advocate applying the voodoo to PRML and EPRML drives even though it will have no more effect than a simple scrubbing with random data. In fact performing the full 35-pass overwrite is pointless for any drive since it targets a blend of scenarios involving all types of (normally-used) encoding technology, which covers everything back to 30+-year-old MFM methods (if you don't understand that statement, re-read the paper). If you're using a drive which uses encoding technology X, you only need to perform the passes specific to X, and you never need to perform all 35 passes. For any modern PRML/EPRML drive, a few passes of random scrubbing is the best you can do. As the paper says, "A good scrubbing with random data will do about as well as can be expected". This was true in 1996, and is still true now.

Looking at this from the other point of view, with the ever-increasing data density on disk platters and a corresponding reduction in feature size and use of exotic techniques to record data on the medium, it's unlikely that anything can be recovered from any recent drive except perhaps a single level via basic error-cancelling techniques. In particular the drives in use at the time that this paper was originally written have mostly fallen out of use, so the methods that applied specifically to the older, lower-density technology don't apply any more. Conversely, with modern high-density drives, even if you've got 10KB of sensitive data on a drive and can't erase it with 100% certainty, the chances of an adversary being able to find the erased traces of that 10KB in 80GB of other erased traces are close to zero.

Another point that a number of readers seem to have missed is that this paper doesn't present a data-recovery solution but a data-deletion solution. In other words it points out in its problem statement that there is a potential risk, and then the body of the paper explores the means of mitigating that risk.

There's also another paper here:
http://www.dban.org/node/40

brianL 06-29-2009 05:34 AM

Another member, linus72, is experimenting with wiping/screwing up a virtual drive to see what can be recovered, in this thread:
http://www.linuxquestions.org/questi...cue-it-736194/

GazL 06-29-2009 05:47 AM

I think the info you have already pulled up does a good job of debunking the need for multiple overwrites.

Head-offsetting, residual magnetic imaging, resonance, and other physical attacks may have been possible in the early days, but with the densities and recording techniques that modern storage devices use, I believe they're far less likely to succeed.

It's still worth using urandom to fill a disk you intend to use encryption on (so an attacker can't tell what is empty space and what is used), but when all you want to do is erase your disk, overwriting with any old junk is probably good enough: x'00', x'FF', x'AA', x'55', it doesn't really matter which.

Reading from /dev/urandom is very, very slow. It'll take several days to overwrite a typically sized disk these days. Doing multiple passes from urandom would probably require you to be very paranoid indeed.
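
You can see the difference without touching a disk at all, just by timing a read from each source into /dev/null (dd prints the throughput when it finishes; the counts here are arbitrary):

Code:

dd if=/dev/urandom of=/dev/null bs=1M count=100
dd if=/dev/zero of=/dev/null bs=1M count=1000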

Dinithion 06-29-2009 05:48 AM

The theory is that when you go from 1 to 0 it isn't a clean 0, but a weak signal. I think the long format in Windows (format c:) sets every bit directly to zero. Therefore the old data can be recovered using analysis tools. Not necessarily with software; sometimes you have to hand the drive over to a laboratory, and they will be able to recover a lot of data. Not everything, but often enough to reconstruct important documents etc.

If you overwrite with ones (a friend of mine made a kernel patch for /dev/one, but I don't know if he ever published it on the Internet :P) and then go back to zeros, there might still be some background information left. In practice it's impossible (today) to recover it all, but you might be able to get some words here and there.

But if you use /dev/urandom, you don't know for sure whether there was a one or a zero before, which makes recovery hopeless.

I read this some days ago. It was easy to understand and it isn't too long. It explains in a little more detail how this works and why /dev/urandom is better than /dev/one, which in turn is better than /dev/zero.

H_TeXMeX_H 06-29-2009 06:00 AM

Quote:

Originally Posted by GazL (Post 3589838)
Reading from /dev/urandom is very, very slow. It'll take several days to overwrite a typically sized disk these days. Doing multiple passes from urandom would probably require you to be very paranoid indeed.

Yeah, I know, that's why I don't like using it.

Quote:

Originally Posted by Dinithion (Post 3589839)
If you overwrite with ones (a friend of mine made a kernel patch for /dev/one, but I don't know if he ever published it on the Internet :P) and then go back to zeros, there might still be some background information left. In practice it's impossible (today) to recover it all, but you might be able to get some words here and there.

There seems to be one posted here:
http://lists.ibiblio.org/pipermail/p...ch/000002.html
I haven't tested it though.
Thanks for the link, it explains things better.

H_TeXMeX_H 06-29-2009 06:12 AM

From what I've read so far, looks like I'll be sticking to 1 wipe with /dev/zero ...
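
That is, something like the following, with /dev/sdX standing in for the real disk. The cmp against /dev/zero is just a cheap sanity check that the whole device reads back as zeros afterwards; if nothing differs, it only reports EOF on the device.

Code:

# single pass of zeros; dd ends with "No space left on device" at the end of the disk, which is expected
dd if=/dev/zero of=/dev/sdX bs=1M

# verify: cmp reports the first byte that differs from the zero stream
cmp /dev/sdX /dev/zero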

Dinithion 06-29-2009 06:55 AM

I will probably do the same. Perhaps I will wipe certain files with dban. Overwriting once with '1' seems to be good enough for the three-letter agencies, so using /dev/urandom is a complete waste of time IMHO.

