security newbie, but not Linux newbie. advice on secure delete tools
Hi there, fellow Linux users!
Please excuse my English.
I'm a semi-experienced Linux system administrator who has only recently been given security-specific responsibilities. I feel a little out of my league! My department recently got a request to come up with a file encryption and secure-delete solution for Linux/UNIX "like we have for Windows."
For almost all tools, I know a great open source option. For encryption, I feel comfortable recommending GnuPG.
But for "secure file deletion," I am confused. This tool is not for erasing the whole hard drive, just individual files. AND it has to conform to the DoD standard.
I would *really* like to use an open source tool here, but I don't have to. I'm just having a hard time knowing whether I'm within the DoD parameters. Most tools don't seem to mention their DoD compliance. I have read many interpretations of what DoD 5220.22-M means, but some of them seem inconsistent.
Without compromising your place of work, what's your favorite secure file deletion tool?
I know this is probably a common question, but I did a search and did not find what I was looking for.
shred is the GNU tool, so you may have it installed already... check the man page.
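A minimal example of shred in action (the filename is just for illustration; adjust the pass count to whatever your policy requires):

```shell
# create a scratch file, then overwrite and unlink it with GNU shred
echo "sensitive data" > /tmp/secret.txt
# -n 3: three overwrite passes, -z: final pass of zeros to hide the
# shredding, -u: truncate and remove the file afterward
shred -n 3 -z -u /tmp/secret.txt
```

Note the caveat in the shred man page itself: it assumes the filesystem overwrites data in place, which is exactly what journaled filesystems may not do.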
This limitation applies to almost all file-wiping tools out there, because these journaled filesystems keep metadata (such as checksums) to detect/prevent corruption, etc...
To overcome these limitations, a tool would need to hack these blocks on a per-filesystem basis.
If the data is sensitive enough, you could try to wipe the free space on the partition immediately afterward... You must use the root account (because of the 5% of filesystem blocks reserved for the super-user). Just be careful...
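A minimal sketch of that free-space wipe. The count=64 caps the filler file at 64 MiB for demonstration; drop it to fill the whole filesystem, and run as root so the reserved blocks are covered too:

```shell
# write zeros into a filler file until it hits the size cap (or, without
# count=, until the disk fills up), flush to disk, then delete the filler
dd if=/dev/zero of=/tmp/zerofill bs=1M count=64 2>/dev/null
sync                  # force the zeros out to the platters
rm -f /tmp/zerofill   # give the space back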
Perhaps in the future someone will add functionality to the kernel API that handles these partitions so it can overwrite data and metadata...
If you find some tools that may be used on these filesystems, please let me know.
Neither wipe nor SecureDelete treats these filesystems specially.
The effectiveness of any "secure delete" facility depends in part on the filesystem being used: how it does caching, how it handles temporary files, and so forth. As far as I know, the usual filesystems for Unix will work as expected.
Could anyone explain why tools like shred don't work on journaled filesystems? AFAIK a journaled filesystem keeps metadata about the file, not the content of the file, so when I overwrite the file 20 times and then delete it, the initial content of the file should not be recoverable.
Please correct me if I'm wrong.
These filesystems are not block-oriented (they won't work as expected). They use trees and hashes to speed up access, so they don't guarantee that data will be overwritten "in place"; that is, the blocks that contain it may be relocated.
Conventional filesystems store files in whole blocks. Roughly speaking, this means that on average half a block of space is wasted per file because not all of the last block of the file is used. If a file is much smaller than a block, then the space wasted is much larger than the file. It is not effective to store such typical database objects as addresses and phone numbers in separately named files in a conventional filesystem because it will waste more than 90% of the space in the blocks it stores them in. By putting multiple items within a single node in Reiser4, we are able to pack multiple small pieces of files into one block.
File-wiping tools are block-oriented. They always round the size up to a multiple of the block size (returned by statvfs(2) in f_bsize). Almost none of them detect the filesystem being used, so small files, and the last block of big files (if smaller than the block size), have an even smaller chance of being overwritten.
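You can see the f_bsize value a wiping tool would round up to with GNU stat in filesystem mode (the path is just an example):

```shell
# stat -f reports filesystem (not file) status; %s is the block size
# from statvfs(2), the granularity wiping tools round file sizes up to
stat -f -c 'f_bsize: %s bytes' /tmp
```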
Another dangerous feature of these filesystems is this:
The idea of steal-on-capture optimization is that only the last committed transaction to modify an overwrite block actually needs to write that block. Other transactions can skip post-commit that block. This optimization, which is also present in ReiserFS version 3, means that frequently modified overwrite blocks will be written less than two times per transaction.
So, file-wiping on these filesystems isn't guaranteed. This means you must take additional precautions to be sure your data isn't there anymore: tweak the filesystem options, use encryption, overwrite the free space, use ext2 on your /home partition, etc...
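To know whether those precautions apply to you, first check which filesystem actually backs the partition in question; the second column of df -T is the type:

```shell
# show the filesystem type (ext2, ext3, reiserfs, ...) backing the
# root partition; substitute any mount point you care about
df -T /
```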
Maybe someone will add functionality to these filesystems in the future to permit true file-wiping...
On reiserfs, it's almost impossible... unless the developers implement the feature in new versions (or someone patches it). They'd never backport it to old versions:
V3 of reiserfs is used as the default filesystem for SuSE, Lindows, FTOSX, Libranet, Gentoo, Xandros and Yoper. We don't touch the V3 code except to fix a bug, and as a result we don't get bug reports for the current mainstream kernel version.
The problem gets worse if the file in question was modified more than once. That data may be anywhere... Because reiserfs tries hard to save space, the tails of multiple files, and even whole small files, may share the same block... so data may be relocated at any time when a tail grows or shrinks, a file is deleted, etc...
This filesystem focuses so hard on speed that I really don't know if they would ever want to add a function to wipe discarded data. They would have to call it many times, every time the on-disk layout gets reshuffled.
I believe that any tool that tries to hack this mess should unmount the partition first, to avoid interference from the filesystem code. The file-related system calls in the Linux API (i.e., open, read, unlink, etc.) are handled by the corresponding filesystem module. Low-level filesystem hacking isn't trivial... and in any case, such a tool would have to scan the whole partition looking for data that isn't assigned to the filesystem and overwrite it... Not to mention the fact that, sometimes, data may be written twice (once to the journal, once to disc).
The reiserfs code is there, and it's open source. Maybe some time...
File-wiping is tricky. There's also the problem of bypassing the blocks that the filesystem considers "bad blocks", which is a different list from the one that may be stored on the hard drive itself... For this, use dban at dban.sf.net
Anyway, I think the best solution will always be encryption...
Set up a pseudo-partition on a file and use dm-crypt, cryptoloop, etc...
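A sketch of that pseudo-partition idea, assuming cryptsetup (dm-crypt) is available. The container path and size are made up, and the cryptsetup/mkfs/mount steps need root, so they are shown commented out:

```shell
# create a 16 MiB container file, pre-filled with random data so the
# encrypted region is indistinguishable from free space
dd if=/dev/urandom of=/tmp/vault.img bs=1M count=16 2>/dev/null
# the following steps need root:
# cryptsetup luksFormat /tmp/vault.img    # encrypt the container
# cryptsetup open /tmp/vault.img vault    # map it to /dev/mapper/vault
# mkfs.ext2 /dev/mapper/vault             # make a filesystem inside it
# mount /dev/mapper/vault /mnt            # use it like any partition
```

Anything deleted inside the container never touches the disk in the clear, so the wiping problem goes away.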
Also, you can set up a cron job that periodically wipes the free space...
I do this sometimes with swap, first creating a temporary swap area to use while the original one is being wiped.
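Roughly, the swap wipe looks like this. The device name is illustrative, and swapoff/swapon need root, so a plain file stands in for the real swap area here:

```shell
# swapoff /dev/hda2                 # (root) take the swap area offline
# overwrite the whole area with zeros, then re-create the swap signature
dd if=/dev/zero of=/tmp/demo.swap bs=1M count=8 2>/dev/null
mkswap /tmp/demo.swap
# swapon /dev/hda2                  # (root) bring it back online
```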
PS: Perhaps fsync(2) may overcome the "steal-on-capture" optimization...