Old 02-01-2009, 10:08 AM   #1
taylorkh
Senior Member
 
Registered: Jul 2006
Location: North Carolina
Distribution: CentOS 6, CentOS 7 (with Mate), Ubuntu 16.04 Mate
Posts: 2,127

Rep: Reputation: 174
A faster way to wipe free space?


On my Windows PC I can use PGP's wipe utility to securely wipe the free space on the disk in a reasonable time: a one-pass wipe of 130 GB on a 320 GB SATA drive takes about 1 hour. On my Linux box, with the same drive model and about the same processor, wiping the same amount of space with sfill (from secure-delete) failed to complete even a single pass in 8 hours; it was still running.

Can anyone recommend a Linux tool that will wipe free space in a more timely manner?

TIA,

Ken

p.s. I have tried using dd to copy /dev/urandom to a file to fill the unused space. This again takes an unacceptable amount of time. Filling a fixed-size file from /dev/urandom (I have tried various sizes) and then making enough copies of it to fill the disk is a little faster, but still not up to what PGP can do.
 
Old 02-01-2009, 10:32 AM   #2
jschiwal
LQ Guru
 
Registered: Aug 2001
Location: Fargo, ND
Distribution: SuSE AMD64
Posts: 15,733

Rep: Reputation: 681
I haven't heard of sfill. I have used df to get the number of blocks available in a filesystem in order to zero out the free space; this made a dd image of the partition compress much better. You could change the same approach to use dd if=/dev/urandom instead. Simply use the amount of free space reported by df to determine the "count" to use in dd, and make dd's block size match the one you gave df.
E.g. sudo dd if=/dev/urandom of=/clearme bs=512 count=60594431
where the count is based on the output of "df --block-size=512 /". From another shell you can query the progress by using "kill" to send the SIGUSR1 signal to the dd process.
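Putting that together, something along these lines should work (a rough sketch only; the mount point and the scratch-file name are just examples, adjust them for your setup):
Code:
# free space on /, in 512-byte blocks (assumes the filesystem to wipe is mounted at /)
BLOCKS=$(df --block-size=512 / | awk 'NR==2 {print $4}')
sudo dd if=/dev/urandom of=/clearme bs=512 count=$BLOCKS

# from another shell: ask dd to print its progress so far (GNU dd reports on SIGUSR1)
sudo pkill -USR1 -x dd

# once dd finishes, remove the scratch file
sudo rm /clearme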

Did you clear the free space while the computer was running? It might run faster if you ran it from a rescue disk or live distro so that other processes aren't writing to the disk at the same time.

If reading from /dev/urandom is the slowdown, you could create a file of random bytes that is a fraction of the free space and then try cat'ing it into a FIFO in a loop. In a subshell, the dd command could read from the FIFO. For this to work, you may need to create a number of FIFO files.
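Roughly like this (an untested sketch; the file names and sizes are only examples):
Code:
# generate 64 MiB of random data once, and a FIFO to serve it from
dd if=/dev/urandom of=/tmp/random.bin bs=1M count=64
mkfifo /tmp/randfifo

# background loop keeps feeding the same random data into the FIFO
( while :; do cat /tmp/random.bin; done > /tmp/randfifo ) &
FEEDER=$!

# dd reads from the FIFO and stops when the target filesystem is full
dd if=/tmp/randfifo of=/data/fillfile bs=1M
kill $FEEDER

rm /tmp/randfifo /data/fillfile
(Note that this repeats the same random block over and over, which is the redundancy trade-off discussed in the next paragraph.)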

Using multiple copies of a file increases redundancy and could make the process insecure. I don't know how random the pattern that PGP uses is; Windows doesn't have a good entropy source, and they may be repeating a pattern.

The method I used would be better for zeroing out free space. Writing to free space doesn't wipe the slack space of other files.

You really need to know how the program works to know whether it does what you think it does.

Last edited by jschiwal; 02-01-2009 at 10:34 AM.
 
Old 02-01-2009, 11:02 AM   #3
taylorkh
Senior Member
 
Registered: Jul 2006
Location: North Carolina
Distribution: CentOS 6, CentOS 7 (with Mate), Ubuntu 16.04 Mate
Posts: 2,127

Original Poster
Rep: Reputation: 174
Thanks jschiwal.

I have zero-filled empty space with a script containing
Quote:
cat /dev/zero > ~/Desktop/zero.fill;sync;sleep 1;sync;rm -f ~/Desktop/zero.fill;
It works reasonably quickly. Doing the same with /dev/urandom takes an impossibly long time.

The drive in question contains a single file system which is used for data storage as a Samba share point. I am not copying any data to or from it with Samba while attempting to run the wipe.

The PGP utility, according to the documentation, first wipes slack space and then the rest of the free space using "highly sophisticated patterns," whatever those are. No doubt better than just 0s.

I think sfill is basically a wrapper around the dd process.

Ken
 
Old 02-01-2009, 11:08 AM   #4
pixellany
LQ Veteran
 
Registered: Nov 2005
Location: Annapolis, MD
Distribution: Mint
Posts: 17,809

Rep: Reputation: 743
I'm barely off the turnip truck in these matters, but......

I would think that (with an optimum block size) dd would be as fast as anything. Interesting idea about using df to see what really needs to be wiped.

For a secure wipe, do several passes with random data and zeros, and write to the whole disk, regardless of what df or anything else says. Better yet, use DBAN:
http://www.dban.org/
 
Old 02-01-2009, 12:52 PM   #5
taylorkh
Senior Member
 
Registered: Jul 2006
Location: North Carolina
Distribution: CentOS 6, CentOS 7 (with Mate), Ubuntu 16.04 Mate
Posts: 2,127

Original Poster
Rep: Reputation: 174
Thanks pixellany. I overlooked the optimized block size; I will give that a try. Unless it is significantly faster than what I have tried... I could copy the data to another drive, pull the drive from the Linux box, install it in the Windows box, run PGP on it, then move it back to the Linux box, and still save a day or two :-(

My goal here is not to do a complete nuke. I just want to routinely wipe the free space on the drive. On the Windows box I wipe the free space on all 3 partitions nightly - 1 pass. I figure that over time that is as good as doing 7 passes once a week.

Ken
 
Old 02-01-2009, 01:10 PM   #6
H_TeXMeX_H
LQ Guru
 
Registered: Oct 2005
Location: $RANDOM
Distribution: slackware64
Posts: 12,928
Blog Entries: 2

Rep: Reputation: 1301
Try using 'dd' with a larger block size 'bs', and use '/dev/zero' as the 'if'.
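For example (the output path is just an example; point it at the filesystem whose free space you want to fill):
Code:
# fill the free space with zeros, then delete the fill file
dd if=/dev/zero of=/data/zero.fill bs=1M
rm /data/zero.fill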

I am surprised anyone would recommend using '/dev/random'. Do you realize how it is generated? It doesn't contain as much random data as you may think; it's really designed to be used as a seed for a generator, not as bulk random data to be written, and it would run out quickly. Besides, nobody except maybe the CIA or FBI will be able to get the data back after zeroing like this ...
 
Old 02-01-2009, 01:10 PM   #7
H_TeXMeX_H
LQ Guru
 
Registered: Oct 2005
Location: $RANDOM
Distribution: slackware64
Posts: 12,928
Blog Entries: 2

Rep: Reputation: 1301
Internal server error as I posted = double post

Last edited by H_TeXMeX_H; 02-01-2009 at 01:13 PM.
 
Old 02-01-2009, 02:37 PM   #8
jschiwal
LQ Guru
 
Registered: Aug 2001
Location: Fargo, ND
Distribution: SuSE AMD64
Posts: 15,733

Rep: Reputation: 681
Quote:
Originally Posted by H_TeXMeX_H View Post
Try using 'dd' with a larger block size 'bs', and use '/dev/zero' as the 'if'.

I am surprised anyone would recommend using '/dev/random'. Do you realize how it is generated? It doesn't contain as much random data as you may think; it's really designed to be used as a seed for a generator, not as bulk random data to be written, and it would run out quickly. Besides, nobody except maybe the CIA or FBI will be able to get the data back after zeroing like this ...
You may be confusing /dev/random and /dev/urandom. /dev/random will run out quickly.

If you were to prepare an encrypted filesystem, you would use /dev/urandom first on the entire drive.
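For example (a destructive one-liner; /dev/sdX is only a placeholder for the target drive):
Code:
# overwrite the whole drive with random data before setting up the encrypted filesystem
# WARNING: this destroys everything on /dev/sdX
dd if=/dev/urandom of=/dev/sdX bs=1M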

Last edited by jschiwal; 02-01-2009 at 02:38 PM.
 
Old 02-01-2009, 03:42 PM   #9
Scorr
LQ Newbie
 
Registered: Feb 2009
Posts: 6

Rep: Reputation: 0
Take a magnet to it! Worked on my cash card. =[
 
Old 02-01-2009, 05:17 PM   #10
taylorkh
Senior Member
 
Registered: Jul 2006
Location: North Carolina
Distribution: CentOS 6, CentOS 7 (with Mate), Ubuntu 16.04 Mate
Posts: 2,127

Original Poster
Rep: Reputation: 174
It seems that block size is the key. I tried bs=512 and in 2 hours was able to write only 14 GB from /dev/urandom; bs=4096 did 5 GB in about half an hour. I am now trying other values and have a little progress script running so I can plot the file growth over time.
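(The progress logger can be as simple as something like this; the fill-file path is only an example:)
Code:
#!/bin/bash
# log a timestamp and the current size (in MB) of the fill file once a minute
FILL=/data2/random.fil
while true; do
    echo "$(date +%T) $(du -m "$FILL" | cut -f1)" >> ~/Desktop/growth.log
    sleep 60
done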

This thread started with a whine about the sfill utility which I just tried yesterday. Getting back onto this problem after a couple of months I find that I have several scripts sitting around from earlier attempts. Perhaps if I get the block size optimized I will be in business.

Thanks all and I will post some results once I get them from my optimization experiments.

Ken
 
Old 02-02-2009, 07:11 AM   #11
H_TeXMeX_H
LQ Guru
 
Registered: Oct 2005
Location: $RANDOM
Distribution: slackware64
Posts: 12,928
Blog Entries: 2

Rep: Reputation: 1301
Quote:
Originally Posted by jschiwal View Post
You may be confusing /dev/random and /dev/urandom. /dev/random will run out quickly.

If you were to prepare an encrypted filesystem, you would use /dev/urandom first on the entire drive.
Oh, oops, I wasn't actually confusing them; I just didn't read it right. You said urandom, but I thought it was random. Well, technically, I had also forgotten about urandom.
 
Old 02-02-2009, 09:52 AM   #12
almatic
Member
 
Registered: Mar 2007
Distribution: Debian
Posts: 547

Rep: Reputation: 67
You may want to check the man page for sfill; as you can see there, it uses 38 passes by default (Gutmann method). You can, however, tune the behaviour (and the security) down to a one-pass overwrite (sfill -ll), which is what you requested.
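For example, a single-pass wipe of the free space on a mount point would look something like this (options from memory; check your local man page):
Code:
# -l -l: reduce to a single random pass, -v: verbose
sfill -l -l -v /data2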
 
Old 02-04-2009, 12:26 PM   #13
taylorkh
Senior Member
 
Registered: Jul 2006
Location: North Carolina
Distribution: CentOS 6, CentOS 7 (with Mate), Ubuntu 16.04 Mate
Posts: 2,127

Original Poster
Rep: Reputation: 174
I did in fact use the -ll option with sfill. I just now tried it on a file system with 4 GB free; it created the wipe file in perhaps 20 minutes and then just sat there for the next half hour. Who knows. On the other hand, I put together the script below, which seems to work quite quickly: it filled 84 GB in 27 minutes. Perhaps not as good as other methods, but I can schedule it to run on a routine basis (a sample cron entry follows the script).
Quote:
#!/bin/bash
#
# start clean: remove any files left over from a prior attempt
#
rm -f ~/Desktop/random.txt
rm -f /data2/random.fil
echo Starting wipe > ~/Desktop/wipe.log
echo $(date +%y/%m/%d_%r) >> ~/Desktop/wipe.log
#
# create a 10 MiB file of random characters
#
dd if=/dev/urandom of=~/Desktop/random.txt bs=1024 count=10240
echo Source file created >> ~/Desktop/wipe.log
echo $(date +%y/%m/%d_%r) >> ~/Desktop/wipe.log
#
# how full is the target file system (Use% column from df)?
#
SP=$(df -H /data2 | grep -v '^Filesystem' | awk '{ print $5 }' | cut -d'%' -f1)
#
# now fill up the target file system with random rubbish
#
while true; do
    if [ $SP -lt 100 ]; then
        cat ~/Desktop/random.txt >> /data2/random.fil
    else
        echo Filesystem full >> ~/Desktop/wipe.log
        echo $(date +%y/%m/%d_%r) >> ~/Desktop/wipe.log
        rm -f /data2/random.fil
        echo wipe file deleted >> ~/Desktop/wipe.log
        echo $(date +%y/%m/%d_%r) >> ~/Desktop/wipe.log
        break
    fi
    SP=$(df -H /data2 | grep -v '^Filesystem' | awk '{ print $5 }' | cut -d'%' -f1)
done
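To run it on a schedule, a cron entry is enough; assuming the script were saved as, say, /usr/local/bin/wipefree.sh (the name is just an example):
Code:
# crontab entry: run the free-space wipe at 2:00 AM every night
0 2 * * * /usr/local/bin/wipefree.sh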
 
Old 04-19-2009, 10:21 AM   #14
erikalm
LQ Newbie
 
Registered: Jan 2007
Distribution: Ubuntu Edgy
Posts: 7

Rep: Reputation: 0
/dev/zero is faster, /dev/urandom is better!

Hello,

I've performed a little test comparing /dev/zero to /dev/urandom with different bs values. Looking solely at speed, the results clearly favor going with /dev/zero. The type of input (zero or urandom) makes a lot of difference, whereas the block size makes almost none. The differences observed may be partly due to other things going on in the system; I did not run the test on a machine doing nothing but the test.

I used a 3 GHz dual-core AMD with 6 GB of memory and a SATA II software RAID 1 array with LVM on top.
Code:
                                    -------- time --------
if=             bs=         MB/s    real      user      sys
/dev/zero       1048576     64.5    2m34s     0m0s      0m25s
/dev/zero       1024        61.3    2m42s     0m1s      1m17s
/dev/zero       524288      60.9    2m43s     0m0s      0m42s

/dev/urandom    1048576     11.1    14m56s    0m0s      14m37s
/dev/urandom    1024        10.4    15m58s    0m1s      15m23s
Even if your setup differs from mine, you should see roughly the same relative difference in speed between zero and urandom.
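If you want to reproduce the comparison on your own hardware, something along these lines should do it (the output path is only an example; size the count to fit your free space):
Code:
# a timed ~10 GiB write; repeat with if=/dev/urandom and with other bs values
time dd if=/dev/zero of=/var/share/ddtest bs=1048576 count=10240
rm /var/share/ddtest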

So, given that zero is so much faster than urandom, I had to go and figure out why you'd wipe a drive with random data at all. My first thought was: hey, it's magnetism; if we can wipe it quickly with zeros, we can do it many times, and that'll do the trick. The only difference between zeros and random data would be that someone could tell whether or not you'd wiped the drive, and only Jack Bauer or James Bond would need to hide that fact... right?

Wrong! Reading up a little on disk wiping and data recovery (http://www.cs.auckland.ac.nz/~pgut00...ecure_del.html) tells another story. It is, in fact, a bad idea to wipe the drive with just zeroes or just ones. However, the author of the above paper also states (in the epilogue) that modern drives won't need more than a couple of random wipes before it becomes practically impossible to recover data from them. The 35-pass Gutmann wipe was invented at a time when disk density was far lower than it is today, and furthermore it was designed to cover a number of different storage technologies, so, according to Gutmann himself (same link, epilogue), the full 35-pass wipe was never intended for practical use unless you had virtually no idea what kind of drives you had or how old they were...

That said, I've amended the script I use to wipe free space on my drives to look like this (in fact, after all is said and done, this is the original version of said script!):

Code:
#!/bin/bash
# file systems whose free space should be wiped, separated by spaces
FSLIST="/var/share /var/files /var/backup/mirror"
SCRATCH_FILE="DELETE_ME"

for FS in $FSLIST; do
    # dd is expected to fail when the file system fills up, so don't abort on errors
    set +e +u
    dd if=/dev/urandom of="${FS}/${SCRATCH_FILE}"
    sync; sync
    rm "${FS}/${SCRATCH_FILE}"
    sync; sync
done
You put the paths of the file systems you wish to wipe in the FSLIST variable, separated by spaces. The script then creates a file in the root of each of these file systems in turn and fills it with data from urandom. Once the file system is full, dd fails and returns control to the script. The sync commands force any data still in buffers out to disk. After the file system has been filled, the scratch file is deleted and the next file system is taken care of.

As you can see from the experiment above, the urandom method is roughly five to six times as slow as the zero method, but given the information in Gutmann's paper, using zeros is just a waste of your time. If you have deleted data that you feel needs to be kept out of the hands of a third party, then a free-space wipe with random data is the way to go.

/Erik

Last edited by erikalm; 04-19-2009 at 10:42 AM.
 
  

