LinuxQuestions.org > Forums > Linux Forums > Linux - Distributions > Slackware
Slackware: This Forum is for the discussion of Slackware Linux.
Old 07-16-2004, 12:32 AM   #16
thegeekster
Member
 
Registered: Dec 2003
Location: USA (Pacific coast)
Distribution: Vector 5.8-SOHO, FreeBSD 6.2
Posts: 513

Rep: Reputation: 34

Quote:
Originally posted by jschiwal
I think using the find command is best in this case
find ./ -type f -exec shred -zuv {} \;
will shred all files in the directory and subdirectories.
Yes, that's a variation of the command I used in the shreddir script I presented above:
Code:
find * -type f -maxdepth 0 | while read i ; do $SHRED "$i" ; done
which can be rewritten as:
Code:
find * -type f -maxdepth 0 -exec $SHRED {} \;
I just piped the find output through a 'while' loop instead of using the '-exec' switch. Both accomplish the same thing; however, using the '-exec' switch as you've shown would probably be slightly faster (maybe a few milliseconds per file found) than running it through a loop, since it executes the 'shred' command immediately as each file is found.


PS: I suppose I've gotten so into the habit of piping the output of the 'find' command to another command that I never think of ways to use the '-exec' switch instead.
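To make the difference concrete, here is a small sketch (using 'echo' as a harmless stand-in for shred, and a made-up /tmp/shreddemo directory) of three equivalent spellings; the third batches many files into a single invocation, which is generally the fastest of all:

```shell
# Demo directory and files (names are made up for the example)
mkdir -p /tmp/shreddemo && cd /tmp/shreddemo
touch "a.txt" "b with spaces.txt"

# 1) Pipe through a while loop, as in the shreddir script
find . -maxdepth 1 -type f | while read -r i ; do echo "loop: $i" ; done

# 2) -exec: one command invocation per file found
find . -maxdepth 1 -type f -exec echo "exec:" {} \;

# 3) -exec ... +: many files batched into a single invocation
find . -maxdepth 1 -type f -exec echo "batch:" {} +
```

Note the spelling 'find . -maxdepth 1' instead of 'find * -maxdepth 0': it also catches dotfiles and avoids GNU find's warning about '-maxdepth' following other options.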

Last edited by thegeekster; 07-16-2004 at 03:01 AM.
 
Old 07-28-2004, 10:19 AM   #17
subekk0
Member
 
Registered: Sep 2003
Location: Dallas, TX.
Distribution: Slacking since '94
Posts: 153

Rep: Reputation: 32
I thought the reason "shred" doesn't work on reiserfs and ext3 is that it can't wipe the whitespace.
 
Old 07-28-2004, 01:00 PM   #18
Smokey
Member
 
Registered: Jul 2004
Distribution: Slackware
Posts: 313

Original Poster
Rep: Reputation: 30
Quote:
Originally posted by thegeekster
Okay, I just made this little script for you and anyone else, too, which will shred the contents of a directory (I was already working on this before I saw your last post). The script can be run from anywhere; all you need to do is supply the name of the directory (with the path if needed) and it will shred the files found in that directory. NOTE: This script will not shred the contents of any subdirectories recursively, only the files found in the named directory. You can change the options passed to the 'shred' command (such as the number of passes) by editing the SHRED variable, using the same options the 'shred' command accepts:
Code:
#!/bin/sh
#*******************************************************************************
# Name: shreddir

SHRED="`which shred` -uvzn 2"

[ -z "$1" -o ! -d "$1" ] && echo "
  Usage: $0 <directory>

NOTE: You must supply the name of a single directory, or include the path to the directory.
" && exit

( cd "$1" ; find * -type f -maxdepth 0 | while read i ; do $SHRED "$i" ; done )
Just copy-and-paste into a text file, name it "shreddir", and put it in the /usr/local/bin directory. Then make it executable with the 'chmod' command:

chmod 755 /usr/local/bin/shreddir

You will need to be root to put it in /usr/local/bin and to run the 'chmod' command. After that, anyone will be able to run this script, as long as they have the proper permissions on the files being shredded.

This works perfectly fine, but how can I change the number of times it overwrites? You have it as '-n 2', but how can I change that on the command line?


Quote:
Originally posted by thegeekster
Yes, that's a variation of the command I used in the shreddir script I presented above:
Code:
find * -type f -maxdepth 0 | while read i ; do $SHRED "$i" ; done
which can be rewritten as:
Code:
find * -type f -maxdepth 0 -exec $SHRED {} \;
I just piped the find output through a 'while' loop instead of using the '-exec' switch. Both accomplish the same thing; however, using the '-exec' switch as you've shown would probably be slightly faster (maybe a few milliseconds per file found) than running it through a loop, since it executes the 'shred' command immediately as each file is found.


PS: I suppose I've gotten so into the habit of piping the output of the 'find' command to another command that I never think of ways to use the '-exec' switch instead.
but how can you specify the directory with this? where?
 
Old 07-28-2004, 01:36 PM   #19
Cedrik
Senior Member
 
Registered: Jul 2004
Distribution: Slackware
Posts: 2,140

Rep: Reputation: 244
Right after the find command. Say you want to shred the files in the /home/smokey/scandalousfiles directory:
Code:
find /home/smokey/scandalousfiles/* -type f -maxdepth 0 -exec $SHRED {} \;
 
Old 07-29-2004, 01:23 PM   #20
thegeekster
Member
 
Registered: Dec 2003
Location: USA (Pacific coast)
Distribution: Vector 5.8-SOHO, FreeBSD 6.2
Posts: 513

Rep: Reputation: 34
Quote:
Originally posted by Smokey
This works perfectly fine, but how can I change the number of times it overwrites? You have it as '-n 2', but how can I change that on the command line?...
If you want to be able to specify the number of overwrites on the command line when using the script, then change the number "2" in the SHRED variable at the top to $2 (Note: $2 is a built-in positional variable which tells bash to substitute the second argument on the command line). It should read like this:

SHRED="`which shred` -uvzn $2"

Quote:
...but how can you specify the directory with this? where?
Cedrik answered this question, but if you want to use it in the script, use it just the way I have it. Replace the find command in the script, which follows "( cd $1 ;", exactly as I have shown, so it looks like this:

( cd "$1" ; find * -type f -maxdepth 0 -exec $SHRED {} \; )

That first part with the "cd $1" means change to the directory supplied on the command line ($1 is the built-in positional variable telling bash to substitute the first argument on the command line). What you are doing is going into the target directory itself before shredding the files in that directory.

If you plan on supplying the number of passes through the command-line arguments, then you should also change the error-checking routine to reflect this. Here's a rewrite of the script taking all these changes into account:
Code:
#!/bin/sh
#*******************************************************************************
# Name: shreddir

SHRED="`which shred` -uvzn $2"

[ $# -ne 2 -o ! -d "$1" ] && echo "
  Usage: $0 <directory> <number of passes>

NOTE: You must supply the name of a single directory, or include the 
path to the directory for the first argument, followed by the number 
of passes to overwrite.
" && exit

( cd "$1" ; find * -type f -maxdepth 0 -exec $SHRED {} \; )
Remember, this will only work on the files in the given directory supplied on the command line and will ignore any subdirectories. This is more for safety, to avoid unintentionally wiping wanted or needed files in other directories.
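As a quick check of the two-argument version, the same logic can be exercised inline (the /tmp/secret directory and file names below are made up for the example):

```shell
#!/bin/sh
# Inline run of the shreddir logic: target directory and pass count,
# exactly as the script would receive them in $1 and $2.
DIR=/tmp/secret      # hypothetical target directory
PASSES=2
mkdir -p "$DIR"
echo "data1" > "$DIR/one.txt"
echo "data2" > "$DIR/two.txt"

SHRED="`which shred` -uvzn $PASSES"
( cd "$DIR" ; find * -type f -maxdepth 0 -exec $SHRED {} \; )

ls -A "$DIR"    # prints nothing: -u unlinks each file after overwriting
```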

Last edited by thegeekster; 07-29-2004 at 01:43 PM.
 
Old 07-29-2004, 01:30 PM   #21
thegeekster
Member
 
Registered: Dec 2003
Location: USA (Pacific coast)
Distribution: Vector 5.8-SOHO, FreeBSD 6.2
Posts: 513

Rep: Reputation: 34
Quote:
Originally posted by subekk0
I thought the reason "shred" doesn't work on reiserfs and ext3 is that it can't wipe the whitespace.
Shred may have problems with journaled filesystems, but Smokey says he's using ext2, which is not a journaled filesystem, so shred will do what it's supposed to do.
 
Old 07-29-2004, 05:30 PM   #22
Shade
Senior Member
 
Registered: Mar 2003
Location: Burke, VA
Distribution: RHEL, Slackware, Ubuntu, Fedora
Posts: 1,418
Blog Entries: 1

Rep: Reputation: 46
Geekster, what I'm imagining isn't a one-size-fits-all shred command, but something more along the lines of codecs for video players... Do you consider mplayer to be bloated?

You could configure shred at compile time with the filesystems you use...
I'd hardly consider that bloat.

Maybe it's time to pick up that C book again and learn the innards of my filesystems :-)

--Shade
 
Old 07-30-2004, 06:32 PM   #23
thegeekster
Member
 
Registered: Dec 2003
Location: USA (Pacific coast)
Distribution: Vector 5.8-SOHO, FreeBSD 6.2
Posts: 513

Rep: Reputation: 34
Quote:
Originally posted by Shade
Geekster, what I'm imagining isn't a one-size-fits-all shred command...
Why not? Especially if it can be done simply and efficiently. Otherwise, that "modular" approach has merit.

Quote:
You could configure shred at compile time with the filesystems you use...
I'd hardly consider that bloat....
That would be hard to do, since shred is part of the coreutils package and there isn't a way to pass any special configuration options for shred alone. You would have to go to the source file (shred.c) in the coreutils package and modify it directly before compiling (maybe even making a patch for others to use).


But I just thought of a workaround (a variation of my idea above) which just might work on journaled or log-based filesystems. Why not create an empty sparse file which fills up the remaining free space just before shredding the target files, removing that empty file afterward?

Creating an empty sparse file is a trivial matter using the 'dd' command, and it is _much_ faster than creating a dummy file filled with zeros. You would just need to know the remaining amount of free space (as a number of blocks) on the partition that contains the target files, which is where the 'df' command comes in handy.

For example, to find the amount of free space on the partition where the file to be shredded resides, using a block size of 512 bytes (the default size for 'dd'), you would run this command:
Code:
df -B512 FILENAME | grep -v 'Filesystem' | awk '{ print $4 }'
"FILENAME" is the name of the file to be shredded. This will output the amount of free space as a number of 512-byte blocks. Now you can create the dummy sparse file with the 'dd' command, using the output of the 'df' command, like so:
Code:
dd if=/dev/zero of=dummy count=0 seek=DFOUT
"DFOUT" would be the output of the previous 'df' command, and you would have to make sure that the "dummy" file is created on the same partition as the file to be shredded, but in a different directory. Since you're not actually writing anything to the disk, merely defining the boundaries of a file of a certain size, the process should only take as long as the 'seek' option of the 'dd' command.

I haven't tested this, but it seems like a feasible thing to do -- that is, if the filesystem in question will allow overwriting files in place. Of course, this would be absolutely useless on any kind of RAID setup, since the purpose of RAID is redundancy, or anywhere an automatic backup system takes snapshots every few minutes.
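Putting the two commands together, the (untested) idea sketches out like this; FILENAME and the dummy path are placeholders, and the dummy file must land on the same partition as the target:

```shell
#!/bin/sh
# Sketch of the free-space-filling idea above; untested on journaled
# filesystems, per the caveats in the thread.
FILENAME=/tmp/target.txt        # hypothetical file to be shredded
touch "$FILENAME"

# Free 512-byte blocks on the partition holding FILENAME
FREE=$(df -B512 "$FILENAME" | grep -v 'Filesystem' | awk '{ print $4 }')

# Sparse dummy file spanning all the free blocks, without writing data
dd if=/dev/zero of=/tmp/dummy bs=512 count=0 seek="$FREE"

ls -l /tmp/dummy    # apparent size roughly equals the free space
rm /tmp/dummy       # would be removed after shredding the target
```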


PS: I'm using the term "sparse" rather loosely. As I understand it, a sparse file merely has isolated bits of data with large areas of non-data (zeros) scattered throughout the file. Usually, a filesystem which can properly handle sparse files will not allocate any space to the empty portions, but only retain pointers to the valid data, thus saving disk space. Here, I'm allocating all the free space to one big empty file, using up all the available disk space without actually writing anything to the disk, in order to save some time.

Last edited by thegeekster; 07-30-2004 at 07:22 PM.
 
Old 07-30-2004, 08:03 PM   #24
thegeekster
Member
 
Registered: Dec 2003
Location: USA (Pacific coast)
Distribution: Vector 5.8-SOHO, FreeBSD 6.2
Posts: 513

Rep: Reputation: 34
Oh well, scratch that idea. It won't work on reiserfs, because reiserfs treats that dummy file as a sparse file. On my root partition ( / ), I have about 1.3G of free space. After creating that dummy sparse file, the 'ls -l' command shows it as a 1.3G file, but the 'df' command still shows 1.3G of free space left.
 
Old 07-31-2004, 05:45 AM   #25
gnashley
Amigo developer
 
Registered: Dec 2003
Location: Germany
Distribution: Slackware
Posts: 4,928

Rep: Reputation: 612
A sparse file on Linux will show its full size with 'ls -l' (though 'du' reports only the blocks actually allocated), and it only occupies the space of the data in the file. Copy the sparse file onto a Windows filesystem, though, and it will OCCUPY its full stated size.
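A quick way to see the difference on Linux (a sketch, assuming /tmp sits on a filesystem with sparse-file support):

```shell
# Create a 1 MiB sparse file without writing any data blocks
dd if=/dev/zero of=/tmp/sparse.bin bs=1M count=0 seek=1

ls -l /tmp/sparse.bin    # apparent size: 1048576 bytes
du -k /tmp/sparse.bin    # allocated size: 0 KiB (nothing was written)
rm /tmp/sparse.bin
```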

Last edited by gnashley; 07-31-2004 at 05:49 AM.
 