LinuxQuestions.org
Old 04-14-2008, 04:36 AM   #1
titopoquito
Senior Member
 
Registered: Jul 2004
Location: Lower Rhine region, Germany
Distribution: Slackware 14.1 (32 and 64 bit)
Posts: 1,594

Rep: Reputation: 125
Hard disk shredding incredibly slow - any speed-up possible?


Hi all,

After I bought a new internal hard disk, I wanted to take the chance and encrypt some of my hard disks with LUKS.
After trying dd (fed from /dev/urandom) to fill the disks with random data, I switched to "shred", which seems to handle the task quicker, although it still seems to take ages. I know the disks are not exactly small, but I wonder if this can be normal.

The first disk, an internal SATA I drive with 500 GB, took about 24 hours for one pass of random data. The second disk, 400 GB and connected externally via USB 2.0, will also take about 24 hours for one pass. Since it is recommended to make not one but several passes with patterns or random data per disk, I wonder if I'm doing something wrong that it takes so long to shred the disks. About one week (24/7) for 7 passes over ONE hard disk seems way too long to me.

I'm a bit clueless here, so any comments and questions are welcome. Some info that might be useful to exclude first suspicions: The chipset is NVidia nForce4, the internal SATA disk is recognized without any problem, in my kernel config AHCI is enabled.

P.S.: I'm not bound to use shred. From what I read, dban (http://dban.sourceforge.net/) tries to wipe ALL available hard disks, which is not appropriate here. I could theoretically switch off the other drives, but I need my computer for work nearly every day, so this is not an option.

Any ideas are highly appreciated!
 
Old 04-14-2008, 05:20 AM   #2
ludist
Member
 
Registered: Nov 2005
Location: Greece
Distribution: Slackware
Posts: 132

Rep: Reputation: 16
Suppose the write speed is 60 MB/s.

400 GB ≈ 400,000 MB

400,000 / 60 ≈ 6,666 seconds

6,666 seconds ≈ 111 minutes. Hmm... something indeed is wrong. Perhaps the random data slows you down?

USB 2.0 is FAR slower than 60 MB/s; I think around 17 MB/s. That works out to about six hours.
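The estimate above can be checked with quick shell arithmetic; note the 60 MB/s and 17 MB/s figures are assumed sustained write speeds, not measured values:

```shell
# Rough single-pass wipe times for a 400 GB (~400,000 MB) disk.
size_mb=400000

# Internal SATA, assuming ~60 MB/s sustained writes:
sata_secs=$((size_mb / 60))
echo "SATA: about $((sata_secs / 60)) minutes"   # about 111 minutes

# USB 2.0, assuming ~17 MB/s in practice:
usb_secs=$((size_mb / 17))
echo "USB2: about $((usb_secs / 3600)) hours"    # about 6 hours
```

Multiply by the number of passes to get the total: seven random-data passes over one 400 GB USB disk would indeed approach two days even at these optimistic rates.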
 
Old 04-14-2008, 05:42 AM   #3
titopoquito
Senior Member
 
Registered: Jul 2004
Location: Lower Rhine region, Germany
Distribution: Slackware 14.1 (32 and 64 bit)
Posts: 1,594

Original Poster
Rep: Reputation: 125
Quote:
Originally Posted by ludist View Post
Suppose the write speed is 60 MB/s.
[...] 111 minutes. Hmm... something indeed is wrong. Perhaps the random data slows you down?

USB 2.0 is FAR slower than 60 MB/s; I think around 17 MB/s. That works out to about six hours.
Yes, I wonder what slows it down that much. This problem is beyond anything I've done so far with my box and beyond what I know about it.

I have to get to work now, but I will check the thread later this evening.
 
Old 04-14-2008, 06:02 AM   #4
pwc101
Senior Member
 
Registered: Oct 2005
Location: UK
Distribution: Slackware
Posts: 1,847

Rep: Reputation: 128
By default, shred will overwrite the data with a series of different patterns (random, all zeroes etc.) 25 times. Thus, if your file is 500 GB, it will actually need to write 25 x 500 GB = 12,500 GB = 12.5 TB. This will take a very long time at 25 MB/s (which is what USB2 really writes at, in my experience).

You can tell shred to reduce (or increase) the number of times it overwrites the data with -n. From info shred:
Quote:
Originally Posted by info shred
`-NUMBER'
`-n NUMBER'
`--iterations=NUMBER'
By default, `shred' uses 25 passes of overwrite. This is enough
for all of the useful overwrite patterns to be used at least once.
You can reduce this to save time, or increase it if you have a lot
of time to waste.
The random data takes longer to write than the zeroes because the random data has to be generated first. As far as I'm aware, what shred does is essentially the same as dd, except it does it 25 times with different data input. For more info on what it's doing, I find it useful to add the --verbose flag to the command, at least to get an idea of what's going on.

Are you sure your disks are performing at optimal speeds? Perhaps you could test them using hdparm -T (or -t) to check all is well with them.
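Since shred treats a regular file the same way as a block device, the flags can be tried out safely on a small scratch file first. This is only a sketch: the /tmp path is an example, and for a real wipe you would point shred at /dev/sdX instead:

```shell
# Create a small scratch file to experiment on (path is just an example).
dd if=/dev/zero of=/tmp/shred-test bs=1M count=16 2>/dev/null

# One overwrite pass instead of the default, with verbose progress:
shred -v -n 1 /tmp/shred-test

# shred overwrites in place, so the file is still there afterwards;
# add -u if you also want it unlinked. Clean up:
rm -f /tmp/shred-test
```

Watching the verbose output on a small file gives a feel for the per-pass rate before committing to a multi-day run on a whole disk.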
 
Old 04-14-2008, 11:12 AM   #5
Randux
Senior Member
 
Registered: Feb 2006
Location: Siberia
Distribution: Slackware & Slamd64. What else is there?
Posts: 1,705

Rep: Reputation: 54
You can run dban in default mode which finds and destroys all your drives (unless there's a bad sector in which case it blindly tells you everything's fine whilst it actually does nothing). But there is a mode to select drives from a menu. You don't have to dban all your drives. It's probably worth a go but it is also slow.

I find it takes a day to dban drives that size. 80 GB drives take overnight. I don't know why ludist's calculations don't work, but they don't match up with my experience of dbanning drives.
 
Old 04-14-2008, 02:49 PM   #6
titopoquito
Senior Member
 
Registered: Jul 2004
Location: Lower Rhine region, Germany
Distribution: Slackware 14.1 (32 and 64 bit)
Posts: 1,594

Original Poster
Rep: Reputation: 125
Many thanks for the responses so far!

Quote:
Originally Posted by pwc101 View Post
By default, shred will overwrite the data with a series of different patterns (random, all zeroes etc.) 25 times. Thus, if your file is 500 GB, it will actually need to write 25 x 500 GB = 12,500 GB = 12.5 TB. This will take a very long time at 25 MB/s (which is what USB2 really writes at, in my experience).
Sorry I didn't mention that. I used the -n switch to do only one pass for testing, and when I saw the time it took I decided to leave it at one pass for now.

Quote:
Originally Posted by pwc101 View Post
Are you sure your disks are performing at optimal speeds? Perhaps you could test them using hdparm -T (or -t) to check all is well with them.
Hm, no. The result of my external USB harddisk:
Code:
/dev/sdc1:
 Timing cached reads:   964 MB in  2.00 seconds = 481.49 MB/sec
 Timing buffered disk reads:   74 MB in  3.00 seconds =  24.65 MB/sec
internal SATA I hard disk:
Code:
/dev/sda1:
 Timing cached reads:   944 MB in  2.00 seconds = 471.32 MB/sec
 Timing buffered disk reads:  208 MB in  3.02 seconds =  68.82 MB/sec
There is not much variation if I repeat it several times, as the man page of hdparm suggests. The buffered disk reads are always around 24 MB/sec and 68 MB/sec. That sounds reasonable given the value you gave me, I think. That is of course the reading speed; I don't know if the writing speed can differ much from normal values when reading is OK.

I just started shred again for fun, with the default values, only with verbose output. The writing speed it indicates is slow, around 4.7 MB/s to the external disk. A usual "cp" of a large file shows around 25 MB/s.

Quote:
Originally Posted by Randux View Post
You can run dban in default mode which finds and destroys all your drives (unless there's a bad sector in which case it blindly tells you everything's fine whilst it actually does nothing). But there is a mode to select drives from a menu. You don't have to dban all your drives. It's probably worth a go but it is also slow.
That's good to know, I probably overlooked it when I looked at the web page. I'll look into it and test whether it speeds things up.

Quote:
Originally Posted by Randux View Post
I find it takes a day to dban drives that size. 80G drives take overnight. I don't know why ludist's calculations don't work but they don't match up with my experience on dbanning drives.
What puzzles me most is that copying files also takes long, but not THAT long, so the *perceived* speed is OK when doing everyday stuff on my box. I wonder if it's more a matter of random number generation, which is of course time and CPU consuming, than a matter of disk speed. At least the difference between a normal file copy and shred's speed looks that way to me ...
 
Old 04-15-2008, 03:06 AM   #7
pwc101
Senior Member
 
Registered: Oct 2005
Location: UK
Distribution: Slackware
Posts: 1,847

Rep: Reputation: 128
Quote:
Originally Posted by titopoquito View Post
What puzzles me most is that copying files also takes long, but not THAT long, so the *perceived* speed is OK when doing everyday stuff on my box. I wonder if it's more a matter of random number generation, which is of course time and CPU consuming, than a matter of disk speed. At least the difference between a normal file copy and shred's speed looks that way to me ...
Generating the random numbers will definitely take much longer. If you allow shred to run without specifying -n, then using a tool such as gkrellm you can see that when it's writing zeroes, ones, and other non-random data, the write speed to the disk is (at least for me) almost 25 MB/s on an external USB2 drive. Conversely, when shred is writing random data, the speed plummets to below a few kilobytes a second, and my CPU usage shoots to 100% for shred.
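A quick way to confirm this yourself is to take the disk out of the picture entirely by writing to /dev/null; the difference in the rates dd reports then comes purely from generating the data, not from the drive:

```shell
# Zeroes cost almost nothing to produce, so this runs at memory speed:
dd if=/dev/zero of=/dev/null bs=1M count=256

# Pseudo-random data is CPU-bound in the kernel, so the reported rate
# is far lower (single-digit MB/s was typical on hardware of that era):
dd if=/dev/urandom of=/dev/null bs=1M count=64
```

If the /dev/urandom rate is already below what the disk can write, the random number generator, not the disk or the USB bus, is the bottleneck.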
 
Old 04-15-2008, 05:02 AM   #8
ludist
Member
 
Registered: Nov 2005
Location: Greece
Distribution: Slackware
Posts: 132

Rep: Reputation: 16
What about

Code:
dd if=/dev/random of=/mnt/mountpoint/OneHugeFileToDelete
dd is dangerous: study it first.
 
Old 04-15-2008, 05:25 PM   #9
titopoquito
Senior Member
 
Registered: Jul 2004
Location: Lower Rhine region, Germany
Distribution: Slackware 14.1 (32 and 64 bit)
Posts: 1,594

Original Poster
Rep: Reputation: 125
dban didn't like my computer. I removed the power from all critical (internal) drives, but it wouldn't see my USB drive.

Quote:
Originally Posted by pwc101 View Post
Generating the random numbers will definitely take much longer. If you allow shred to run without specifying -n, then using a tool such as gkrellm, you can see that when it's writing zeroes, ones, and other non-random data, the write speed to the disk is (at least for me) almost 25MB/s on an external USB2 drive. Conversely, when shred is writing random data, the speed plummets to below a few kilobytes a second, and my CPU usage shoots to 100% for shred.
Yes, more and more I think the random numbers are the bottleneck. Since shred starts with a random-data pass, I haven't seen the pattern writing yet.

Quote:
Originally Posted by ludist
what about
Code:
dd if=/dev/random of=/mnt/mountpoint/OneHugeFileToDelete
Not a real solution: /dev/random is soooooooooooooo slow compared to /dev/urandom that the thought alone gives me a headache.
/dev/urandom gives me about 4.8 MB/s; /dev/random is reported as 0.0 kB/s.

I think my best choice is to use shred with a limited number of passes, and maybe start it from a 64-bit live CD to get faster /dev/urandom output.
Thanks for all the answers. My conclusion: disk wiping, especially on a USB-connected hard disk, IS simply slow.
 
Old 04-15-2008, 06:14 PM   #10
unSpawn
Moderator
 
Registered: May 2001
Posts: 29,359
Blog Entries: 55

Rep: Reputation: 3546
Quote:
Originally Posted by titopoquito View Post
dban didn't like my computer. I removed the power from all critical (internal) drives, but it wouldn't see my USB drive.
As an alternative you could use BCWipe, but



Quote:
Originally Posted by titopoquito View Post
Yes, I more and more think the random numbers are the bottleneck. Since shred starts with a random number run, I haven't seen the pattern writing.
...here's a nice writeup from the LFS people that'll make you happy: http://www.linuxfromscratch.org/hint...es/entropy.txt

Or in numbers, urandom:
Code:
 time dd if=/dev/urandom bs=1M count=10 of=/var/tmp/speed 
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 9.07479 seconds, 1.2 MB/s
real    0m9.084s
user    0m0.001s
sys     0m9.039s
And here's erandom:
Code:
time dd if=/dev/erandom bs=1M count=10 of=/var/tmp/speed 
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.52803 seconds, 19.9 MB/s
real    0m0.546s
user    0m0.004s
sys     0m0.541s
:-]
 
Old 04-16-2008, 01:08 AM   #11
pdw_hu
Member
 
Registered: Nov 2005
Location: Budapest, Hungary
Distribution: Slackware, Gentoo
Posts: 346

Rep: Reputation: Disabled
I dd-ed my 80 GB partition just like this. It was a 5400 rpm, UDMA/100 laptop hard drive.
It took about 16 hours :/
 
Old 04-16-2008, 05:18 AM   #12
titopoquito
Senior Member
 
Registered: Jul 2004
Location: Lower Rhine region, Germany
Distribution: Slackware 14.1 (32 and 64 bit)
Posts: 1,594

Original Poster
Rep: Reputation: 125
Quote:
Originally Posted by unSpawn View Post
As an alternative you could use BCWipe, but


...here's a nice writeup from the LFS people that'll make you happy: [...] And here's erandom:
Code:
time dd if=/dev/erandom bs=1M count=10 of=/var/tmp/speed 
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 0.52803 seconds, 19.9 MB/s
real    0m0.546s
user    0m0.004s
sys     0m0.541s
:-]
I do! You made my day, unSpawn.
 
Old 04-16-2008, 08:03 AM   #13
ledow
Member
 
Registered: Apr 2005
Location: UK
Distribution: Slackware 13.0
Posts: 241

Rep: Reputation: 34
Proper disk wiping is slow, that's all.

I wipe disks regularly on behalf of the schools I work at. Basically, it just takes forever. And the more "secure" you want it, the longer it takes: more passes, more randomness, etc. You can increase your randomness in various ways (such as entropy daemons etc.), but most of the time it's a waste of effort; just leave it running overnight.

If I'm being asked to destroy known sensitive data, then I do a couple of secure passes with random data and just leave it running forever (I wouldn't bother with USB drives, I'd either dismantle them and do it over IDE/SATA or I'd just destroy USB keys and supply new ones). If I'm selling my old disks on eBay, I do a couple of passes just to defeat casual analysis by people without extremely expensive data recovery hardware.

Once, I even took home a batch of disks (which didn't have critical data on them but which needed to be destroyed) and just smashed them to pieces with a sledgehammer and then threw them onto a bonfire. It was infinitely quicker than wiping them.

Our recycling company says they do data destruction too, and I absolutely do not believe they do anything but destroy the physical media; they couldn't wipe every disk securely before recycling, given the number, types, and state of the disks we send them. They must just destroy them entirely and then supply new drives.

Destroy the drive or leave it running overnight, unless you're planning on wiping dozens at a time constantly. Even then, I'd think that the hardware to do it securely would cost a lot more than 12/24/48 old machines just churning away at it overnight.
 
Old 04-16-2008, 09:02 AM   #14
titopoquito
Senior Member
 
Registered: Jul 2004
Location: Lower Rhine region, Germany
Distribution: Slackware 14.1 (32 and 64 bit)
Posts: 1,594

Original Poster
Rep: Reputation: 125
Quote:
Originally Posted by ledow View Post
If I'm being asked to destroy known sensitive data, then I do a couple of secure passes with random data and just leave it running forever (I wouldn't bother with USB drives, I'd either dismantle them and do it over IDE/SATA or I'd just destroy USB keys and supply new ones).
I will do that for a second external disk, which is no longer under warranty. I never realized how slow USB is, probably because I work in parallel while copying etc. takes place.

Quote:
Originally Posted by ledow View Post
Once, I even took home a batch of disks (which didn't have critical data on them but which needed to be destroyed) and just smashed them to pieces with a sledgehammer and then threw them onto a bonfire. It was infinitely quicker than wiping them.
Nice. Surely satisfying when you know how long it takes otherwise
 
  

