LinuxQuestions.org


mattie_linux 07-25-2005 11:37 PM

security newbie, but not Linux newbie. advice on secure delete tools
 
Hi there fellow Linux users!

Please excuse my English.

I'm a semi-experienced Linux system administrator who has only recently been given security-specific responsibilities. I feel a little out of my league! My department recently got a request to come up with a file encryption and secure delete solution for Linux/UNIX "like we have for Windows."

For almost all tools, I know a great open source option. For encryption, I feel comfortable recommending gnupg. But for "secure file deletion," I am confused. This tool is not for erasing the hard drive, but just individual files. AND it has to conform to the DoD 5220.22-M standard.
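
For the encryption side, what I have in mind is just plain gpg symmetric mode, something like this (the filename is only an example):

Code:

# encrypt a single file with a passphrase; writes report.txt.gpg
gpg --symmetric --cipher-algo AES256 report.txt
# decrypt it again later
gpg --output report.txt --decrypt report.txt.gpg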

I would *really* like to use an open source tool here, but I don't have to. I'm just having a hard time knowing whether I'm within the parameters of DoD. Most tools don't seem to mention their DoD compliance, and I have read many interpretations of what DoD 5220.22-M means, but some things seem inconsistent.

Without compromising your place of work, what's your favorite secure file delete tool?

I know this is probably a common question, but I did a search and did not find what I was looking for.
thanks,
mattie

primo 07-26-2005 01:04 AM

There's wipe at http://wipe.sf.net/
and THC's SecureDelete at www.thc.org

THC's is best. It's a suite of four commands:
srm - for files
sswap - swap
smem - memory
sfill - fill the hard drive

wipe has many interesting options, but it's only for files
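
Rough usage for the four commands, from memory (the mount point and swap device here are just examples; check each manpage, and run them as root):

Code:

srm -v secret.txt      # securely overwrite and unlink one file
sfill -v /home         # wipe free space on the filesystem holding /home
swapoff /dev/hda2
sswap /dev/hda2        # wipe a (disabled) swap partition
swapon /dev/hda2
smem                   # wipe unused memory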

Linux~Powered 07-26-2005 02:41 PM

I use bcwipe. Works great...

Code:

bcwipe -mg -v yourfile
bcwipe

jonaskoelker 07-26-2005 02:49 PM

see also: shred.

It's somewhat limited, though: the in-place-overwrite assumption it relies on doesn't hold on journaling FSes. I don't know anything about the other suggested programs, so I can't say if (and how) they deal with journaling FSes.
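
A typical invocation looks like this (the filename is just a placeholder):

Code:

# 7 overwrite passes, a final pass of zeros, then unlink the file
shred -v -n 7 -z -u secret.txt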

hth --Jonas

primo 07-28-2005 01:53 AM

shred is the GNU one, so you may already have it installed... check the manpage

This limitation applies to almost all file-wiping tools out there, because journaled filesystems keep metadata (such as checksums) to detect and prevent corruption, etc.
To overcome this, a tool would need to handle those blocks on a per-filesystem basis.

If the data is sensitive enough, you could also wipe the free space on the partition immediately afterwards... You must do it as root (because of the 5% of filesystem blocks reserved for the superuser). Just be careful...
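
A crude way to do that free-space wipe, as root (the mount point is only an example; sfill from SecureDelete does the same job more thoroughly):

Code:

# fill the filesystem with zeros until it's full, flush, then delete the filler
dd if=/dev/zero of=/home/wipefile bs=1M
sync
rm -f /home/wipefile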

Perhaps in the future someone will add functionality to the kernel code that handles these filesystems so the overwrite is performed on both data and metadata...


If you find some tools that may be used on these filesystems, please let me know.
Neither wipe nor SecureDelete treats these filesystems specially.

tireseas 07-29-2005 12:20 PM

I use shred -fu <file-to-be-deleted> and that seems to work fine, and I am using reiserfs.
How would I go about double-checking to see whether or not this is actually working as I think it is?

primo 07-29-2005 02:06 PM

As root, run strings(1) on the partition (if the file was text) and grep(1) for some text
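
For example, assuming the file lived on /dev/hda1 and contained a known phrase:

Code:

# as root: scan the raw partition for leftover plaintext
strings /dev/hda1 | grep 'some unique phrase from the file'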

neo77777 08-02-2005 10:08 PM

There's also srm: it writes garbage to the file before unlinking it, so if you try to restore the file you will not get its original contents.
http://srm.sourceforge.net/
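
Usage mirrors rm (note this is a different project from THC's srm, even though the command name is the same):

Code:

srm secret.txt     # overwrite the file's contents, then unlink it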

sundialsvcs 08-03-2005 07:16 AM

The effectiveness of any "secure delete" facility depends in part upon the filesystem being used: how it does caching, handles temporary files, and so forth. As far as I know, the usual filesystems for Unix will work as expected.

ddaas 08-03-2005 10:22 AM

Could anyone explain why tools like shred don't work on journaled filesystems? AFAIK a journaled filesystem keeps metadata about the file, not the content of the file, so if I re-write the file 20 times and then delete it, the original content of the file should not be recoverable.
Please correct me.

primo 08-03-2005 03:40 PM

These filesystems are not block-oriented (so wiping tools won't behave as expected on them). They use trees and hashes to speed up access, so they don't guarantee that data will be overwritten "in place"; that is, the blocks that contain it may be relocated.

The following quotes are from the reiserfs documentation at http://www.namesys.com/

Quote:

Sharing Blocks Saves Space

Conventional filesystems store files in whole blocks. Roughly speaking, this means that on average half a block of space is wasted per file because not all of the last block of the file is used. If a file is much smaller than a block, then the space wasted is much larger than the file. It is not effective to store such typical database objects as addresses and phone numbers in separately named files in a conventional filesystem because it will waste more than 90% of the space in the blocks it stores them in. By putting multiple items within a single node in Reiser4, we are able to pack multiple small pieces of files into one block.

File-wiping tools are block-oriented. They always round the size up to a multiple of the block size (returned by statvfs(2) in f_bsize). Almost none of them detect the filesystem being used, so small files and the last block of big files (if smaller than the block size) have an even lesser chance of being overwritten.
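
You can check the block size a wiping tool would round up to with GNU stat's filesystem mode (the path is just an example):

Code:

stat -f /home      # prints the filesystem's block size among other details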

Another dangerous feature of these filesystems is:
Quote:

Journalling optimizations
Copy-on-capture

The idea of steal-on-capture optimization is that only the last committed transaction to modify an overwrite block actually needs to write that block. Other transactions can skip post-commit that block. This optimization, which is also present in ReiserFS version 3, means that frequently modified overwrite blocks will be written less than two times per transaction.

So, file-wiping on these filesystems isn't guaranteed. This means that you must take additional precautions to be sure your data isn't there anymore: tweak the filesystem options, use encryption, overwrite the free space, keep /home on ext2, etc.
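
The ext2-for-/home idea, for instance, is just an /etc/fstab entry (the device name is invented for the example; adjust to your disk layout):

Code:

# /etc/fstab: keep /home on plain ext2 so overwrites happen in place
/dev/hda3   /home   ext2   defaults   1 2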

Maybe in the future someone will add functionality to these filesystems to permit true file-wiping...

ddaas 08-04-2005 01:59 AM

So, there are no secure-erase tools which work as expected on journaled filesystems?

primo 08-04-2005 04:44 AM

On reiserfs, it's almost impossible... unless the developers implement the feature in new versions (or someone patches it). They'd never backport it to old versions:
Quote:

V3 of reiserfs is used as the default filesystem for SuSE, Lindows, FTOSX, Libranet, Gentoo, Xandros and Yoper. We don't touch the V3 code except to fix a bug, and as a result we don't get bug reports for the current mainstream kernel version.

The problem gets worse if the file in question was modified more than once. That data may be anywhere... This is because reiserfs tries hard to save space, and even the tails of multiple files and small files may share the same block, so data can be relocated at any time when a tail grows or shrinks, a file is deleted, etc.

This filesystem focuses so hard on speed that I really don't know if they would ever want to add a wipe function for discarded data. They would have to call it every time the on-disk layout gets rearranged.

I believe any tool that tries to hack this mess should unmount the partition first, to avoid interference from the filesystem code. The file-related system calls in the Linux API (open, read, unlink, etc.) are handled by the corresponding filesystem module. Low-level filesystem hacking isn't trivial... and in any case, such a tool would have to scan the whole partition looking for data that isn't assigned to the filesystem and overwrite it... Not to mention the fact that, sometimes, data may be written twice (once to the journal, once to disk).

The code to reiserfs is there and it's open-source. Maybe some time...

File-wiping is tricky. There's also the problem of bypassing the blocks that the filesystem considers "bad blocks", which is a different list from the one that may be stored on the hard drive itself... To wipe a whole drive regardless of all this, use dban at dban.sf.net

Anyway, I think the best solution will always be encryption...
Set up a pseudo-partition on a file and use dm-crypt, cryptoloop, etc.
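
A rough sketch of the file-backed dm-crypt approach (container path, size, loop device and mapping name are all made up for the example; read the cryptsetup docs before trusting it with real data):

Code:

# create a 256 MB container file and attach it to a loop device
dd if=/dev/urandom of=/root/vault.img bs=1M count=256
losetup /dev/loop0 /root/vault.img
# map it through dm-crypt (plain mode), create a filesystem, mount it
cryptsetup -c aes -h sha256 create vault /dev/loop0
mkfs.ext2 /dev/mapper/vault
mount /dev/mapper/vault /mnt/vault
# tear it down when finished
umount /mnt/vault
cryptsetup remove vault
losetup -d /dev/loop0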

Also, you can set up a cron job that periodically wipes free space...
I do it sometimes with swap, first creating a temporary swap area to use while the original one is being wiped.
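
Roughly like this (the swap partition, sizes and paths are invented for the example; sfill and sswap are the SecureDelete tools mentioned above):

Code:

# root crontab entry: wipe free space on /home every Sunday at 03:00
# 0 3 * * 0  /usr/local/bin/sfill /home

# wiping the swap partition while a temporary swap file stands in
dd if=/dev/zero of=/root/tmp.swap bs=1M count=512
mkswap /root/tmp.swap && swapon /root/tmp.swap
swapoff /dev/hda2
sswap /dev/hda2
swapon /dev/hda2
swapoff /root/tmp.swap && rm -f /root/tmp.swap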

PS: Perhaps fsync(2) may overcome the "Copy-on-capture" optimization...

ddaas 08-05-2005 02:30 AM

Here's a test I made that seems relevant.

I wrote some text to a file, ran shred file_name, and then ran strings /dev/hda1 | grep 'mytext' as root.
I could still find the text.
The test was done on an ext3 filesystem.
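
Reconstructed as a script, roughly (the device and the marker string are just examples):

Code:

# write a file with a unique marker, sync it to disk, shred it, then scan the raw device
echo 'zx9-unique-marker-zx9' > /home/testfile
sync
shred -u /home/testfile
strings /dev/hda1 | grep 'zx9-unique-marker-zx9'   # a hit means the data survived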


So there is no secure delete with classical tools on a journaled filesystem...

int0x80 08-05-2005 10:36 AM

DBAN (http://dban.sourceforge.net/)

Wipe Methods
Quick Erase
Canadian RCMP TSSIT OPS-II Standard Wipe
American DoD 5220.22-M Standard Wipe
Gutmann Wipe
PRNG Stream Wipe
http://dban.sourceforge.net/features.html

