
taylorkh 02-01-2009 10:08 AM

A faster way to wipe free space?
 
On my Windows PC I can use PGP's wipe utility to securely wipe the free space on the disk in a reasonable time: a one-pass wipe of 130 GB on a 320 GB SATA drive takes about 1 hour. On my Linux box, with the same drive model and about the same processor, wiping the same amount of space with sfill (from the secure-delete package) failed to complete even a single pass in 8 hours. It was still running.

Can anyone recommend a Linux product which will wipe free space in a more timely manner?

TIA,

Ken

p.s. I have tried using dd to copy /dev/urandom to a file to fill the unused space. This again takes an unacceptable amount of time. Filling a fixed size file with /dev/urandom (I have tried various sizes) then making enough copies of the file to fill the disk is a little faster but still not up to what PGP can do.

jschiwal 02-01-2009 10:32 AM

I haven't heard of sfill. I have used df to get the number of blocks available in a filesystem in order to zero out the free space; that made a dd image of the partition compress much better. You could change it to use dd if=/dev/urandom instead. Simply use the amount of free space reported by df to determine the "count" to give dd, and make the block sizes the same.
E.g. sudo dd if=/dev/urandom of=/clearme bs=512 count=60594431
where the count is based on the output of "df --block-size=512 /". In another shell you can check the progress by using "kill" to send the SIGUSR1 signal to the dd process.
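
Putting that together, something like this should work (the mount point and file name are placeholders):

Code:

# determine the number of free 512-byte blocks on the target filesystem
FREE=$(df -B 512 /mnt/data | awk 'NR==2 { print $4 }')
# fill that many blocks with random data, then remove the file again
sudo dd if=/dev/urandom of=/mnt/data/clearme bs=512 count="$FREE"
sudo rm /mnt/data/clearme
# from another shell, ask dd how far it has got:
#   sudo kill -USR1 $(pgrep -x dd)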

Did you clear the free space while the computer was running? It might run faster if you ran it from a rescue disk or live distro so that other processes aren't writing to the disk at the same time.

If reading from /dev/urandom is the slowdown, you could create a file that is a fraction of the free space, fill it with random bytes, and then cat it into a fifo in a loop. In a subshell, the dd command could read from the fifo. For this to work, you may need to create a number of fifo files.
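
A rough, untested sketch of that idea (paths and sizes are placeholders):

Code:

# 64 MiB of random bytes to recycle
dd if=/dev/urandom of=/tmp/rand.chunk bs=1M count=64
mkfifo /tmp/rand.fifo
# feed the chunk into the fifo over and over in the background
( while true; do cat /tmp/rand.chunk; done > /tmp/rand.fifo ) &
writer=$!
# read from the fifo until the target filesystem is full
dd if=/tmp/rand.fifo of=/mnt/data/clearme bs=1M iflag=fullblock
kill "$writer"
rm /tmp/rand.fifo /tmp/rand.chunk /mnt/data/clearme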

Using multiple copies of a file increases redundancy and could make the process insecure. I don't know how random the pattern PGP uses actually is. Windows doesn't have a good entropy source, so they may be repeating a pattern.

The method I used would be better for zeroing out free space. Writing to free space doesn't wipe the slack space of other files.

You really need to know how the program works to know whether it does what you think it does.

taylorkh 02-01-2009 11:02 AM

Thanks jschiwal.

I have used dd to zero fill empty space by a script containing
Quote:

cat /dev/zero > ~/Desktop/zero.fill;sync;sleep 1;sync;rm -f ~/Desktop/zero.fill;
It works reasonably quickly. Doing the same with /dev/urandom takes an impossibly long time.

The drive in question contains a single file system which is used for data storage as a Samba share point. I am not copying any data to or from it with Samba while attempting to run the wipe.

The PGP utility, according to the documentation, first wipes slack space and then the rest of the free space using "highly sophisticated patterns," whatever those are. No doubt better than just 0s.

I think sfill is basically a wrapper around the dd process.

Ken

pixellany 02-01-2009 11:08 AM

I'm barely off the turnip truck in these matters, but......

I would think that (with an optimum block size) dd would be as fast as anything. Interesting idea about using df to see what really needs to be wiped.

For a secure wipe, do several passes with random data and zeros, writing to the whole disk, regardless of what df or anything else says. Better yet, use DBAN:
http://www.dban.org/

taylorkh 02-01-2009 12:52 PM

Thanks pixellany. I overlooked optimizing the block size. I will give that a try. Unless that is significantly faster than what I have tried... I could copy the data to another drive, remove the drive from the Linux box, install it in the Windows box, use PGP, then remove it and reinstall it in the Linux box, and still save a day or two :-(

My goal here is not to do a complete nuke. I just want to routinely wipe the free space on the drive. On the Windows box I wipe the free space on all 3 partitions on a nightly basis - 1 pass. I figure that over time that is as good as doing 7 passes once a week.
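
Once a method is settled on, scheduling it nightly is just a cron entry, something like this (the script path and time are placeholders):

Code:

# e.g. in /etc/cron.d/wipe-free-space: run the wipe script at 02:30 every night
30 2 * * *  root  /usr/local/sbin/wipe-free-space.sh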

Ken

H_TeXMeX_H 02-01-2009 01:10 PM

Try using 'dd' with a larger block size 'bs', and use '/dev/zero' as the 'if'.

I am surprised anyone would recommend using '/dev/random'. Do you realize how it is generated? It doesn't contain as much random data as you may think; it's really designed to be used as a seed for a generator, not as bulk random data to be written, and it would run out quickly. Besides, nobody except maybe the CIA or FBI would be able to get the data back after zeroing like this ...
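
For example, something like this with a 1 MiB block size (the mount point and file name are placeholders):

Code:

# fill the free space with zeros using a 1 MiB block size;
# dd stops with "No space left on device" when the filesystem is full
dd if=/dev/zero of=/mnt/data/zero.fill bs=1M
sync
rm /mnt/data/zero.fill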

jschiwal 02-01-2009 02:37 PM

Quote:

Originally Posted by H_TeXMeX_H (Post 3428305)
Try using 'dd' with a larger block size 'bs', and use '/dev/zero' as the 'if'.

I am surprised anyone would recommend using '/dev/random'. Do you realize how it is generated? It doesn't contain as much random data as you may think; it's really designed to be used as a seed for a generator, not as bulk random data to be written, and it would run out quickly. Besides, nobody except maybe the CIA or FBI would be able to get the data back after zeroing like this ...

You may be confusing /dev/random and /dev/urandom. /dev/random will run out quickly.

If you were to prepare an encrypted filesystem, you would use /dev/urandom first on the entire drive.
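
For example (sdX is a placeholder -- this destroys everything on that drive):

Code:

# overwrite the whole device with random data before setting up encryption
dd if=/dev/urandom of=/dev/sdX bs=1M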

Scorr 02-01-2009 03:42 PM

Take a magnet to it! Worked on my cash card. =[

taylorkh 02-01-2009 05:17 PM

It seems that block size is the key. I tried bs=512 and in 2 hours was able to write only 14GB from /dev/urandom. bs=4096 did 5 GB in about half an hour. I am now trying other values and have a little progress script running so I can plot the file growth over time.
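
A logger for that sort of thing can be as simple as this (file names are placeholders):

Code:

# log the size of the growing fill file once a minute
while true; do
    echo "$(date +%T) $(du -m /data2/random.fil | cut -f1) MB" >> ~/Desktop/growth.log
    sleep 60
done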

This thread started with a whine about the sfill utility which I just tried yesterday. Getting back onto this problem after a couple of months I find that I have several scripts sitting around from earlier attempts. Perhaps if I get the block size optimized I will be in business.

Thanks all and I will post some results once I get them from my optimization experiments.

Ken

H_TeXMeX_H 02-02-2009 07:11 AM

Quote:

Originally Posted by jschiwal (Post 3428367)
You may be confusing /dev/random and /dev/urandom. /dev/random will run out quickly.

If you were to prepare an encrypted filesystem, you would use /dev/urandom first on the entire drive.

Oh, oops, I wasn't actually confusing them; I just didn't read it right. You said urandom, but I thought it was random. Well, technically I also forgot about urandom :)

almatic 02-02-2009 09:52 AM

You may want to check the man page for sfill; as you can see there, it defaults to 38 passes (the Gutmann method plus additional random passes). You can, however, tune the behaviour (and the security) down to a 1-pass overwrite (sfill -ll), which is what you requested.
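
Going by the man page's option syntax, a verbose single-pass run over a mount point would be something like:

Code:

sfill -l -l -v /data2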

taylorkh 02-04-2009 12:26 PM

I did in fact use the -ll option with sfill. I just now tried it on a file system with 4 GB free. It created the wipe file in perhaps 20 minutes, then just sat there for the next half hour. Who knows. On the other hand, I put together this script, which seems to work quite quickly: it filled 84 GB in 27 minutes. Perhaps not as good as other methods, but I can schedule it to run on a routine basis.
Quote:

#!/bin/bash
#
# start clean, remove any files from a prior attempt
#
rm -f ~/Desktop/random.txt
rm -f /data2/random.fil
echo "Starting wipe" > ~/Desktop/wipe.log
echo $(date +%y/%m/%d_%r) >> ~/Desktop/wipe.log
#
# create a 10 MiB file of random characters
#
dd if=/dev/urandom of=~/Desktop/random.txt bs=1024 count=10240
echo "Source file created" >> ~/Desktop/wipe.log
echo $(date +%y/%m/%d_%r) >> ~/Desktop/wipe.log
#
# how full is the target file system? (Use% column from df)
#
SP=$(df -H /data2 | grep -v '^Filesystem' | awk '{ print $5 }' | cut -d'%' -f1)
#
# now fill up the target file system with random rubbish
#
while true; do
    if [ "$SP" -lt 100 ]; then
        cat ~/Desktop/random.txt >> /data2/random.fil
    else
        echo "Filesystem full" >> ~/Desktop/wipe.log
        echo $(date +%y/%m/%d_%r) >> ~/Desktop/wipe.log
        rm -f /data2/random.fil
        echo "wipe file deleted" >> ~/Desktop/wipe.log
        echo $(date +%y/%m/%d_%r) >> ~/Desktop/wipe.log
        break
    fi
    SP=$(df -H /data2 | grep -v '^Filesystem' | awk '{ print $5 }' | cut -d'%' -f1)
done

erikalm 04-19-2009 10:21 AM

/dev/zero is faster, /dev/urandom is better!
 
Hello,

I've performed a little test comparing /dev/zero to /dev/urandom with different bs values. Looking solely at speed, the results clearly favour /dev/zero. The type of input file (zero or urandom) makes a lot of difference, whereas the block size makes almost none. Some of the observed variation may be due to other things going on in the system - I did not run this test on a machine doing nothing but the test.

I used a 3 GHz dual-core AMD with 6 GB of memory and a SATA II disk in software RAID 1 with LVM on top.
Code:

if=             bs=         MB/s    real     user    sys
/dev/zero       1048576     64.5    2m34s    0m0s    0m25s
/dev/zero       1024        61.3    2m42s    0m1s    1m17s
/dev/zero       524288      60.9    2m43s    0m0s    0m42s

/dev/urandom    1048576     11.1    14m56s   0m0s    14m37s
/dev/urandom    1024        10.4    15m58s   0m1s    15m23s

Even if your setup differs from mine, you should see roughly the same relative difference in speed between zero and urandom.
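
If you want to repeat the comparison yourself, a timed run looks something like this (the size and path are only illustrative; the MB/s figure is the one dd reports in its transfer summary):

Code:

# write 10 GiB and time it; repeat with if=/dev/urandom and other bs values
time dd if=/dev/zero of=/var/share/ddtest bs=1048576 count=10240
rm /var/share/ddtest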

So, given that zero is so much faster than urandom, I had to go and figure out why you'd wipe a drive with random data at all. My first thought was: hey, it's magnetism; if we can wipe it quickly with zeros, we can do it many times, and that'll do the trick. The only difference between zeros and random would be that someone could tell whether you'd wiped the drive or not, and only Jack Bauer or James Bond would need to hide that fact... right?

Wrong! Reading up a little on disk wiping and data recovery (http://www.cs.auckland.ac.nz/~pgut00...ecure_del.html) tells another story. It is, in fact, a bad idea to wipe the drive with just zeroes or just ones. However, the author of the above paper also states (in the epilogue) that modern drives won't need more than a couple of random wipes before it becomes practically impossible to recover data from them. The 35-pass Gutmann wipe was devised at a time when disk density was far lower than it is today, and furthermore it was designed to cover a number of different storage technologies, so, according to Gutmann himself (same link, epilogue), the full wipe was never intended for practical use unless you had virtually no idea what drives you had or how old they were...

That said, I've amended the script I use to wipe free space on my drives to look like this (in fact, after all is said and done, this is the original version of said script!):

Code:

#!/bin/bash
# file systems to wipe, separated by spaces
FSLIST="/var/share /var/files /var/backup/mirror"
# name of the temporary fill file created in the root of each file system
SCRATCH_FILE="DELETE_ME"

for FS in $FSLIST; do
        # don't abort when dd fails on a full file system
        set +e +u;
        # fill the free space with random data until dd runs out of space
        dd if=/dev/urandom of="${FS}/${SCRATCH_FILE}";
        sync; sync;
        # remove the fill file to give the space back
        rm "${FS}/${SCRATCH_FILE}";
        sync; sync;
done

You put the paths of the file systems you wish to wipe in the FSLIST variable, separated by spaces. The script then creates a file in the root of each of these file systems, in turn, and fills it with data from urandom. Once the file system is full, dd fails and returns control to the script. The sync command forces any data still sitting in buffers out to disk. After the file system has been filled, the scratch file is deleted and the next file system is taken care of.

As you can see from the above experiment, the urandom method is roughly five to six times slower than the zero method, but given the information in Gutmann's paper, using zeros is just a waste of your time. If you have deleted data that you feel needs to be kept out of the hands of a third party by means of a free-space wipe, then you should go with random wipes.

/Erik

