LinuxQuestions.org (/questions/)
-   Linux - Software (http://www.linuxquestions.org/questions/linux-software-2/)
-   -   find | xargs | shred seems really slow (http://www.linuxquestions.org/questions/linux-software-2/find-%7C-xargs-%7C-shred-seems-really-slow-845086/)

Bertical 11-18-2010 11:10 AM

find | xargs | shred seems really slow
 
What am I doing wrong here? Shredding a directory full of files is incredibly slow, while shredding a single file of similar size is around 1000 times faster. The filesystem is ext4 and the OS is Slackware-current.

Looking at top, shred's status shows as 'D', which according to the man page means uninterruptible sleep! Also, /usr/bin/time reported around 10 minutes, but that output appeared approx. 7 minutes before the command prompt came back!

Code:


[virgil@thunderbird2:~/q] $ du -sh
336M        .
[virgil@thunderbird2:~/q] $ find . -type d | wc
    160    160  12038
[virgil@thunderbird2:~/q] $ find . -type f | wc
  6721    6721  687549
[virgil@thunderbird2:~/q] $ /usr/bin/time find . -type f -print0 | xargs -0 -r shred -zu
0.00user 0.00system 10:11.76elapsed 0%CPU (0avgtext+0avgdata 3616maxresident)k
0inputs+0outputs (0major+285minor)pagefaults 0swaps
[virgil@thunderbird2:~]

[virgil@thunderbird2:~] $ dd if=/dev/urandom of=qwerty bs=336M count=1
1+0 records in
1+0 records out
352321536 bytes (352 MB) copied, 51.6693 s, 6.8 MB/s
[virgil@thunderbird2:~]
[virgil@thunderbird2:~] $ ls -l qwerty
-rw-r--r-- 1 virgil virgil 352321536 2010-11-18 15:13 qwerty
[virgil@thunderbird2:~]
[virgil@thunderbird2:~] $ /usr/bin/time shred -zu qwerty
0.92user 0.78system 0:16.94elapsed 10%CPU (0avgtext+0avgdata 2912maxresident)k
0inputs+2752528outputs (0major+235minor)pagefaults 0swaps
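One caveat about the measurement above (an editorial note, not from the thread): in a pipeline, `/usr/bin/time` wraps only the command on its left, here `find`, which is why it reports 0% CPU and finishes before the last `xargs`/`shred` batch does. Wrapping the whole pipeline in `sh -c` times all three stages together. A minimal sketch, run against a scratch directory so nothing real gets shredded:

```shell
# The original invocation times only find; to time the whole pipeline:
#
#   /usr/bin/time sh -c 'find . -type f -print0 | xargs -0 -r shred -zu'
#
# Safe demonstration of that pipeline on a throwaway directory:
demo=$(mktemp -d)
echo data > "$demo/f1"
sh -c "find '$demo' -type f -print0 | xargs -0 -r shred -zu"
rmdir "$demo"    # succeeds only if shred -u removed every file
```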

thanks

David the H. 11-19-2010 08:17 AM

I don't know exactly why you're seeing performance quite that bad, but shred has to overwrite each file several times in order to destroy its data, as opposed to simply creating a file, which takes only a single write pass. With thousands of small files, that per-file overhead adds up far faster than one large sequential write like your dd test. There may be buffering effects going on as well.
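The multi-pass behaviour can be seen directly with shred's `-v` flag. A small sketch (using a temporary file, not anything from the thread): GNU shred defaults to 3 random passes, `-n` changes that count, `-z` adds a final zero pass, and `-u` removes the file afterwards.

```shell
# Show shred's passes on a throwaway file: 1 random pass (-n 1),
# then a zeroing pass (-z), then unlink (-u). -v prints each pass.
f=$(mktemp)
echo secret > "$f"
shred -v -n 1 -zu "$f"
[ -e "$f" ] && echo "still there" || echo "gone"   # prints "gone"
```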

The find | xargs pipe chain can only slow it down even more. But you really shouldn't need xargs here anyway: find can apply commands on its own, using the -exec option.
Code:

find . -type f -exec shred -zu '{}' \+
The \+ at the end of the -exec command makes it behave the same way xargs does, passing as many files as possible to a single run. If your command can't handle multiple arguments, you'd have to use \; instead, which runs the command separately for each file find feeds it.
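A minimal sketch of the two terminators on a scratch directory (the directory and filenames are made up for illustration):

```shell
# Create a throwaway directory with a few small files.
demo=$(mktemp -d)
for i in 1 2 3; do echo "data$i" > "$demo/f$i"; done

# '+' batches as many filenames as possible into one shred invocation:
find "$demo" -type f -exec shred -zu '{}' +

# ';' would instead fork one shred process per file, e.g.:
#   find "$demo" -type f -exec shred -zu '{}' \;

remaining=$(find "$demo" -type f | wc -l)
echo "remaining: $remaining"   # prints "remaining: 0"
rmdir "$demo"
```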

Bertical 11-19-2010 10:33 AM

Thanks for the informative reply. Using find this way speeds it up considerably. I don't know what has changed, but this slow deletion is a recent thing. I run a cron job that tidies up the log files and also shreds all the files in the squid proxy cache. That directory is always around 93M, so the change in the script's run time was quite noticeable. I did switch from ext3 to ext4 around a month ago; I wonder if that's the reason. I'll have to investigate further.
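A cron job like the one described might look roughly like this. The schedule and cache path are assumptions for illustration, not taken from the thread, and squid should not be actively using the cache while its files are shredded:

```shell
# Hypothetical crontab entry: nightly at 03:30, shred everything under
# the (assumed) squid cache directory using find's batched -exec form.
30 3 * * *  find /var/spool/squid -type f -exec shred -zu '{}' +
```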

H_TeXMeX_H 11-19-2010 10:36 AM

Also make sure the HDD is not failing.
