Linux - Software
This forum is for Software issues. Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum.
Distribution: Mint Xfce, Korora Gnome3, Ubuntu Server NoGui,
Posts: 136
Original Poster
Rep:
about wipe
I would use urandom, because I think wipe suffers from the same issue shred does; I believe someone mentioned it earlier. It has to do with modern HDDs and journaled filesystems. Here is a link to the warning in wipe's man page: http://manpages.ubuntu.com/manpages/...n1/wipe.1.html
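For what it's worth, the urandom approach can be sketched as a small shell function. This is only an illustration of the idea, not the wipe(1) tool: it scrubs unallocated space by filling it, and on a journaled filesystem blocks held by the journal may still escape it. The optional size cap is an addition of mine for safe experimentation, not part of any real tool.

```shell
#!/bin/sh
# Sketch: overwrite a filesystem's free space with /dev/urandom, then delete
# the fill file. An illustration, not a replacement for a proper wipe tool;
# journaled filesystems may keep old data in the journal.
scrub_free_space() {
    dir=$1          # any writable directory on the filesystem to scrub
    cap_mb=${2:-0}  # optional cap in MiB (0 = run until the disk is full)
    if [ "$cap_mb" -gt 0 ]; then
        dd if=/dev/urandom of="$dir/.fill" bs=1M count="$cap_mb" 2>/dev/null || true
    else
        # with no cap, dd stops on its own with "No space left on device"
        dd if=/dev/urandom of="$dir/.fill" bs=1M 2>/dev/null || true
    fi
    sync                # flush the page cache and journal to disk
    rm -f "$dir/.fill"  # give the (now overwritten) space back
}
```

Run uncapped, this fills the whole filesystem before freeing it again, so anything else writing to that filesystem at the time will hit "no space" errors.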
I did try running other utilities, most recently on two very different versions of the same distro. On Lucid Lynx Ubuntu running the GNOME desktop environment, everything worked perfectly: the three files were erased, the exact amount of free space was recovered, and nothing short of taking the drives out for professional recovery was going to bring the data back. On Oneiric Ocelot Ubuntu running the KDE desktop environment, it takes two consecutive wipes before the space is completely recovered. Doing some digging, I've been unable to find similar issues on much older kernels. There also seems to be a connection with whether the kernel is 64-bit or 32-bit. Other than general bugs I created myself, my laptop (32-bit kernel) doesn't have this issue; the test machine on which I can reproduce it is 64-bit, and there are plenty of other odd issues with the 64-bit kernels.
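The "exact amount of free space recovered" check described above can be scripted. Here is a rough sketch; the mount point and the wipe command in the usage comment are placeholders, not what was actually run:

```shell
#!/bin/sh
# Report a filesystem's available space in 1K blocks, using POSIX df output
# so the column positions are predictable.
free_kib() {
    df -P "$1" | awk 'NR==2 {print $4}'
}

# Usage sketch -- paths and tool are placeholders, adjust to your setup:
#   before=$(free_kib /home)
#   wipe /home/user/file1 /home/user/file2 /home/user/file3
#   after=$(free_kib /home)
#   echo "recovered: $((after - before)) KiB"
```

Comparing the recovered amount against the sum of the deleted files' sizes is what shows whether one pass freed everything or a second wipe is needed.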
Additionally, when the problem can be reproduced at all, it seems much less likely to require consecutive wipes on ext3 than on ext4. The man who let me borrow his machine to run these tests thinks that, given enough time and distro copies, I'd find this happens more often on Debian and Debian-derived distros than on Fedora and its derivatives. I don't have all the distros to test this, but perhaps someone in the blogosphere has multiple distros and the time to install them and test his theory.
Distribution: Mint Xfce, Korora Gnome3, Ubuntu Server NoGui,
Posts: 136
Original Poster
Rep:
Nice. Mine is 32-bit where the problem occurs, but it is also ext4, and I think ext4 is a pretty big factor. I was going back and forth with Andrew, who I think is one of the devs over at BleachBit. He said it had something to do with fsync and how he used it in the code; the link I left earlier is a patch for that, though it had some side effects I didn't like. He said it was a temporary fix that still needed work, so they are aware of this.

I also pointed out that even with the patch it is still not writing data to parts of the free space during the wipe. I demonstrated this with a cron job that runs df -h every minute while dd runs, and again while BleachBit runs. Here is the finding. df first reports 95G used and 140G total on the partition, with 38G available. As you can see, there is a 7G difference between what's available and the difference between total and used. It's in this 7G that most of my issue was. With dd, the cron job showed that after available space hit 0 it kept writing to disk until used space matched total drive space. BleachBit stopped writing when it reached 0 space available, at 133G used, leaving that 7G gap between used and total.

This probably doesn't come into play often: maybe the OS doesn't usually write to this 7G, making the problem random, or it only shows up when the disk is pretty full. Remember, the same images were recoverable for me initially and multiple wipes did nothing, so I think those files were from a previous OS but happened to reside in that mysterious 7G gap, or that 7G gap is the 5% that the journal uses and is generally protected.
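The df arithmetic above is just total minus used minus available; here is a quick sketch of that check using the figures from the run (140G total, 95G used, 38G available). One note I can add with some confidence: on ext2/3/4, a gap like this usually corresponds to the reserved block count, 5% by default and normally set aside for root, which you can read with `tune2fs -l /dev/sdXn | grep Reserved` (the device name there is a placeholder).

```shell
#!/bin/sh
# Compute the "hidden" gap that df shows: total - used - available.
# All arguments are whole GiB for simplicity.
gap_gib() {
    echo $(( $1 - $2 - $3 ))
}

# The numbers from the post: 140G total, 95G used, 38G available.
gap_gib 140 95 38   # prints 7
```

5% of the 140G partition is 7G, which matches the observed gap exactly, consistent with the reserved-blocks explanation.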