shredding everything within a directory
Sometimes I have a directory that I would like to completely 'shred', but since the 'shred' command cannot shred entire directories at once, how can I make it shred the files one by one? This isn't possible with what's given on the command line, right? Would I need to look into a Python or Perl script to do this? Thank you. :study:
|
Code:
for i in * ; do shred "$i" ; done
That'll do all the files, but it will probably spit out an error when it hits a directory. However... I just read in the manpage for shred that: Quote:
You're almost certainly running Reiser or EXT3... --Shade |
I currently have ext2 :)
|
Hey, Shade
I've been giving some thought to that bit about the journaled filesystems since I came across it several weeks ago, and an idea I came up with is to create a dummy file that fills up the whole partition, which you can then shred if you want, with something like this:
Code:
dd if=/dev/zero of=dummy
Let that command run until it runs out of disk space. Then you can run the command:
Code:
shred dummy
The theory is that since there is no more room on the partition, shred will have to overwrite the file in place. And the shredding is optional: since the dummy file is nothing but zero bits, you can merely 'rm dummy'. Of course, if you're the paranoid type, then shred the file. ;)

However, the thing to be aware of with this method is that while there is no more room on the disk, no new files can be created on it, such as tmp files created by the OS, so it would be best to log out of any GUIs and run as few apps and services as possible while doing this. For data partitions, this shouldn't be a problem.

Another thing to remember is that this can take a very long time. The dummy file will be very large, which can take a long time to create... and if you shred it, too... well, you get the idea.

I haven't tried this because I'm not the paranoid type, but I thought I'd share it for those who may be interested in my :twocents:. :) |
Hey, I just thought of a variation on my theme above... this shouldn't take quite as long.
Before removing any files you want to shred, first create the dummy file with the 'dd' command I showed above. Then run the shred command on the files to be shredded, after which you can 'rm dummy'.

For this to work correctly, the dummy file can't be in the same directory as the files to be shredded, or it will be shredded along with the rest by that 'for' loop above, which will take forever to accomplish (shredding the dummy file). You can do something like this: Code:
dd if=/dev/zero of=../dummy |
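Putting that variation together end to end, here is a small sandboxed sketch. All the /tmp paths are made up for the demo, and the dummy is capped at 1 MB so it can be tried safely; on a real partition you would omit the count= limit and let 'dd' run until the disk fills.

```shell
#!/bin/sh
# Sandboxed walk-through of the dummy-file trick; the /tmp paths are
# hypothetical and the dummy is capped at 1 MB for safety.
demo=/tmp/shred_demo
mkdir -p "$demo/secret"
echo "sensitive data" > "$demo/secret/file1"

# Step 1: create the dummy OUTSIDE the directory being shredded
# (on a real disk: drop count= and let dd fill the free space).
dd if=/dev/zero of="$demo/dummy" bs=1024 count=1024 2>/dev/null

# Step 2: shred and unlink the target files.
for i in "$demo/secret"/* ; do
    [ -f "$i" ] && shred -n 5 -uz "$i"
done

# Step 3: the dummy holds only zeros, so a plain rm is enough.
rm "$demo/dummy"
```

The key point is that the dummy lives in the parent directory, so the loop over the target directory never touches it.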
ok but what if you want to use any of the shred switches?
such as: shred -n 555 -uvz filename.format |
Quote:
Code:
for i in * ; do shred -n 555 -uvz "$i" ; done |
To avoid errors from the 'for' loop when one of the "files" is a subdirectory, modify the command to test whether "$i" is a regular file:
Code:
for i in * ; do [ -f "$i" ] && shred -n 555 -uvz "$i" ; done |
I don't know how to execute this code that you guys told me about
Code:
for i in * ; do shred -n 555 -uvz "$i" ; done |
Okay, I just made this little script for you and anyone else, too, which will shred the contents of a directory (I was already working on this before I saw your last post :D). This script can be run from anywhere; all you need to do is supply the name of the directory (with the path if needed) and it will shred the contents of that directory, and only the files found in that directory.

NOTE: This script will not shred the contents of any subdirectories recursively; only the files found in the named directory. You can change the options passed to the 'shred' command (such as the number of passes) by editing the SHRED variable, using the same options the 'shred' command accepts:
Code:
#!/bin/sh
Code:
chmod 755 /usr/local/bin/shreddir
You will need to be root to put it in the /usr/local/bin directory and make it executable with the 'chmod' command. After that, anyone will be able to run this script as long as they have the proper permissions on the files being shredded. :) |
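Only the shebang of that script survives in the post above. Based on the fragments quoted further down in the thread (the SHRED variable and the 'cd $1 ; find ...' line), here is a hedged sketch of what 'shreddir' may have looked like, written as a shell function; the default pass count and the error message are guesses, not the original.

```shell
#!/bin/sh
# Hypothetical reconstruction of 'shreddir'; the original script body
# was not preserved in the thread, so details here are guesses.
shreddir() {
    dir=$1
    passes=${2:-25}    # assumed default number of passes

    # Error checking: the first argument must be an existing directory.
    if [ ! -d "$dir" ]; then
        echo "usage: shreddir DIRECTORY [PASSES]" >&2
        return 1
    fi

    SHRED="$(which shred) -uvzn $passes"

    # Shred only the regular files directly inside the directory;
    # subdirectories and their contents are left alone.
    ( cd "$dir" && find . -maxdepth 1 -type f -exec $SHRED {} \; )
}
```

It would be called as, e.g., 'shreddir /home/smokey/scandalousfiles 5'. Note the use of 'find . -maxdepth 1' rather than the thread's 'find * -maxdepth 0'; the latter breaks on empty directories and on filenames beginning with a dash.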
Geekster, some nice stuff.
I was thinking about how to get something working on a journalled system as well, after I had read an article in MacWorld about MacOSX's "srm" or secure remove command... I found shred, and wondered how similar they are. There has to be a better way than filling up the entire partition though. I'm going to do some more research on this. --Shade |
Shade
While it does take a little time to create that dummy file (the time will vary depending on how much free space there is on the drive or partition), it does have the added benefit of zeroing the free space on the drive, even if you don't shred that large dummy file. ;) But for a quick shred, yeah, it would be nice to find a different way of doing it on journaled filesystems. :) |
Thought --
Would it be possible to analyze inode info against journal info to "shred" the exact areas on the disk the file is stored? Perhaps a patch to shred could be developed. --Shade |
That would seem to be the best approach: finding the actual locations on the disk where the file is stored and just targeting those locations. But with all the different journaled filesystems, and not just the Linux ones, that would seem to be a pretty tall order... and it might bloat shred quite a bit. :)
|
I think using the find command is best in this case
Code:
find ./ -type f -exec shred -zuv {} \;
will shred all files in the directory and subdirectories. |
Quote:
Code:
find * -maxdepth 0 -type f | while read i ; do $SHRED "$i" ; done Code:
find * -maxdepth 0 -type f -exec $SHRED {} \;
(Note: '-maxdepth' should come before the other tests, or GNU find will complain.) PS: I suppose I've gotten into the habit of piping the output of the 'find' command to another command, so I never think of ways to use the '-exec' switch instead... ;) |
I thought the reason "shred" doesn't work on reiser and ext3 is that it can't wipe the whitespace. :scratch:
|
Right, with the 'find' command, say you want to shred the files in the /home/smokey/scandalousfiles directory:
Code:
find /home/smokey/scandalousfiles/* -maxdepth 0 -type f -exec $SHRED {} \; |
Quote:
SHRED="`which shred` -uvzn $2" Quote:
( cd $1 ; find * -maxdepth 0 -type f -exec $SHRED {} \; )
That first part with the "cd $1" means change to the directory supplied on the command line ($1 is the built-in positional variable telling bash to substitute the first argument on the command line). What you are doing is going into the target directory itself before shredding the files in that directory.

If you plan on supplying the number of passes through the command-line arguments, then you should also change the error-checking routine to reflect this. Here's a rewrite of the script taking all these changes into account: Code:
#!/bin/sh |
Geekster, what I'm imagining isn't a one-size-fits-all shred command; it's something more along the lines of codecs for video players... Do you consider mplayer to be bloated? ;)
You could configure shred at compile time with the filesystems you use... I'd hardly consider that bloat. Maybe it's time to pick up that C book again and learn the innards of my filesystems :-) --Shade |
Quote:
Quote:
But I just thought of a workaround (a variation of my idea above) which just might work on journaled or log-based filesystems. Why not create an empty sparse file which fills up the remaining free space just before shredding the target files, and remove that empty file afterward?

Creating an empty sparse file is a trivial matter using the 'dd' command, and it is _much_ faster than creating a dummy file filled with zeros. You would just need to know the remaining amount of free space (as a number of blocks) on the partition that contains the target files, which is where the 'df' command comes in handy.

For example, to find and isolate the amount of free space on the partition where the file to be shredded resides, using a block size of 512 bytes (the default size for 'dd'), you would run this command: Code:
df -B512 FILENAME | grep -v 'Filesystem' | awk '{ print $4 }'
Then use that number of free blocks (DFOUT below) as the seek value for 'dd': Code:
dd if=/dev/zero of=dummy count=0 seek=DFOUT
I haven't tested this, but it seems like a feasible thing to do, that is, if the filesystem in question will allow overwriting of files in place. Of course, this would be absolutely useless on any type of RAID setup, since the purpose of RAID is redundancy, or anywhere an automatic backup system is in place which takes snapshots every few minutes. :)

PS: I'm using the term "sparse" rather loosely. As I understand it, a sparse file merely has isolated bits of data with large areas of non-data (zeros) scattered throughout the file. Usually, a filesystem which can properly handle sparse files will not allocate any space to the empty portions, but will only retain pointers to the valid data, thus saving disk space. Here, I'm allocating all the free space to one big empty file, to use up all the available disk space without actually writing anything to the disk, in order to save some time. |
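Chained together, those two commands look something like the sketch below. It is in the spirit of the post, not a tested recipe: /tmp stands in for the directory holding the target files, the dummy is removed again at the end, and no actual shredding is performed here.

```shell
#!/bin/sh
# Sketch: size a sparse dummy to the free space on a partition.
# /tmp is a stand-in for the directory holding the target files.
TARGETDIR=/tmp

# Free 512-byte blocks on the partition holding TARGETDIR; -P keeps
# each filesystem on one line so the awk field position is reliable.
DFOUT=$(df -P -B512 "$TARGETDIR" | grep -v 'Filesystem' | awk '{ print $4 }')

# Seek past that many blocks without writing anything: the result is
# a sparse file whose apparent size equals the free space.
dd if=/dev/zero of=/tmp/dummy count=0 seek="$DFOUT" 2>/dev/null

# ... shred the target files at this point, then remove the dummy:
rm /tmp/dummy
```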
Oh well, scratch that idea. It won't work on reiserfs, because reiserfs will treat that dummy file as a sparse file. On my root partition ( / ), I have about 1.3G of free space. After creating that dummy sparse file, the 'ls -l' command shows it as being a 1.3G file, but the 'df' command still shows 1.3G of free space left. :(
|
A sparse file on Linux will show its full size with 'ls -l' (or 'du --apparent-size'), but it only occupies the space of the actual data in the file. If you put the sparse file on Windows, it will OCCUPY its full stated size. |
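A quick way to see that behaviour for yourself (a minimal demo; the /tmp path and the 1 MiB size are arbitrary choices):

```shell
#!/bin/sh
# Make a 1 MiB sparse file: seek to 1M without writing any data.
dd if=/dev/zero of=/tmp/sparse_demo bs=1 count=0 seek=1M 2>/dev/null

# Apparent size vs. blocks actually allocated on disk.
stat -c 'apparent size: %s bytes' /tmp/sparse_demo
stat -c 'allocated blocks: %b'    /tmp/sparse_demo
```

On a filesystem with sparse-file support, the first line reports 1048576 bytes while the allocated block count stays at or near zero.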