LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - Software (https://www.linuxquestions.org/questions/linux-software-2/)
-   -   disapointing bleachbit results: READ (https://www.linuxquestions.org/questions/linux-software-2/disapointing-bleachbit-results-read-4175450568/)

minty33 02-17-2013 01:44 PM

Disappointing BleachBit results: READ
 
I am writing this to inform as well as to get feedback, and to see if others can duplicate my results. I have used BleachBit for a long time and just assumed it worked well, given all the recommendations I've seen for it. I know how to run it both as root and as a regular user, and I actually run it fairly often. Today I ran into an issue with a document that LibreOffice couldn't open because it was corrupted. I needed this document for school, so, trying everything, I ran foremost from the command line to see if I could recover an older version. Given my use of BleachBit I thought this was a lost cause, but to my surprise a ton of documents were recovered. After seeing this I ran it on AVI and JPG files, and again it recovered them. Based on these results, I'm here to say that BleachBit's secure-wipe feature does not work. If you use BleachBit and assumed it works, try downloading and running foremost on a file type of your choice and be prepared to be surprised/disappointed.

Let me know your opinions and experience.

The lesson for me: this is why I never trust or use programs that seem to do too much. I should have just stuck with shred and dd, but I wanted the easy all-in-one solution I thought BleachBit offered. To me this proves again that all-in-one usually means "jack of all trades, master of none".

jcgrin 02-17-2013 10:33 PM

Hold on, let me make sure I'm reading this right: you ran a command from a shell prompt, as a standard user or as root, and it recovered files that BleachBit reported destroyed? I've personally never been able to do that, and at times I've wanted to. I'd be interested to know if any other BleachBit users are able to duplicate your results. I can't.

minty33 02-18-2013 06:28 AM

I ran foremost with sudo, so yes, as root. After posting on BleachBit's site as well, I'm starting to think maybe I was unclear on how foremost works. My assumption was that it finds deleted files on an image/drive. If that's the case, my original point stands. If it finds all files, including deleted ones, I may have misspoken. Let me clarify. I ran foremost as follows:

foremost -t jpg /dev/sda2 -o ~/recovered

With that command (I have also run it for DOCs and AVIs), what I get back is a ton of stuff, except for DOCs I only get about 7 of them, and I have more than that on my drive, which is why I thought it was only searching free space. The odd thing is that most of the JPGs look like icons and images for web pages, etc., and most of the AVIs are either images, for some reason, or quick animations, like a file-transfer animation for an OS.

What I'm thinking is that maybe these are part of GTK or something, but there are still odd ones, like some Windows icons and some screenshots of Google and security sites I don't recognize ever visiting. I assumed they might have been from the previous owner, since the laptop was given to me a year ago, they ran Windows, and I never have. The whole time I've had it I ran BleachBit as a user and as root, so you can imagine this surprises me if in fact it isn't finding images on the non-free portion of the disk.

To test this I'm going to download The Sleuth Kit and run a tool called dls, which makes an image of free space only, then pipe that to foremost so I know for a fact it's only scanning free space. I'll update after I do that today or tomorrow. BTW, I used BleachBit to shred the folder of the recovered files and ran foremost again, and the files were still there. Unfortunately their original locations aren't reported by foremost, so all I know is that they exist, or existed, somewhere on sda2. Any input on foremost or in general is welcome.

minty33 02-18-2013 09:30 AM

follow up
 
OK, I used blkls, which is the new name for the dls tool in The Sleuth Kit. This tool reads raw data from unallocated space on an image/drive. I piped its output to foremost to make sure foremost was only checking free disk space. Here is the command:

blkls -A /dev/sda2 | foremost -t jpg /dev/sda2 -o ~/recovered

After this, the same JPG images as before show up. I ran BleachBit on /, /root, /home, and /home/"myhomedir" manually, as well as automatically via the check box for wiping free disk space. I also used the shred feature to delete the recovered folder containing the images from my previous foremost results. I think it's safe to say BleachBit is not doing its job in my case. If you think my methodology is flawed, please tell me where, as I would like to resolve this as much as anyone.
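A cleaned-up sketch of the pipeline above, for anyone following along. This assumes The Sleuth Kit (blkls) and foremost are installed, and that foremost carves from standard input when it is given no image argument; passing the device to foremost as well, as in the command above, would likely make it read the whole partition instead of the pipe. The device name and output directory are the examples from this thread.

```shell
# Carve JPEGs from unallocated space only (needs root on a real device).
# Skips gracefully when the tools are not installed.
if command -v blkls >/dev/null 2>&1 && command -v foremost >/dev/null 2>&1; then
    # blkls -A streams only unallocated blocks; foremost reads the pipe.
    blkls -A /dev/sda2 | foremost -t jpg -o ~/recovered
else
    echo "blkls/foremost not installed; nothing to do"
fi
```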

jcgrin 02-18-2013 12:24 PM

It doesn't work on my personal machine. I'm going to try this on another machine with a different setup and function to see if it's reproducible there. Some things have never functioned normally on my laptop, and at the moment I'm not in a position to solve all of its instabilities, so I can't say whether this is BleachBit failing to do what it says or just my machine acting up. Someday soon there will be an HDD replacement and a fresh install.

minty33 02-18-2013 01:38 PM

My guess is it's not BleachBit as a whole, because I can't imagine it flat-out doesn't work in most cases, but something is stopping it from functioning perfectly, at least in this one case, so I think it's worth finding out why, or at least what I have going on that stops it from wiping certain things. I do know not all my stuff is recoverable, but I don't get why anything is. It's got to be something obvious, because almost all the JPGs look like website images, images from PDF books, or icons. No family pics, etc. are recovered, just these same random ones, including some Windows icons and animations. I may be reaching here, but is it possible these are remnants of a virus from a previous Windows install, and that those blocks have somehow been protected in a more fundamental way, making them unwritable by BleachBit or any other OS or software? Is that even possible?

jcgrin 02-21-2013 01:34 PM

Here's what I was able to duplicate on another machine, using three junk ten-megabyte files.

Test one: I let BleachBit remove the files and overwrite the cleaned space; foremost was able to recover two of the files and a partial of the third. Test two: I let BleachBit run and clean the space, then ran it again; foremost was unable to recover any files. Tests three, four, five, and six: I erased the files via the trash can and then ran BleachBit. In the first two of those tests foremost recovered all three files, in the fifth test no files were recovered, and in the sixth test only a single file was partially recovered.

It seems like BleachBit is doing its job most of the time, but for some reason there are circumstances in which it doesn't, or in which the job is incomplete. An associate of mine suggests that something may be holding onto the data, and that BleachBit's inability to touch it generates an obscure error that may or may not show up depending on configuration. As an interesting note, in no test did BleachBit wipe the complete thirty megs in one pass; it always took multiple runs. This was run on an Ubuntu 12.04 LTS server box with a KDE desktop environment. Other flavors and configurations may yield different results.

minty33 02-21-2013 02:02 PM

nice test
 
Nice job, jcgrin. I have also noticed that it's inconsistent, more so than always failing or always working. I have tested this using different folders and combinations to wipe, but I see no real correlation. Something else I noticed: it worked better on DOC and video files than on JPGs. Certain JPEGs on my OS I just can't get rid of. I'm in school and do a work-study program, so my time is limited, but I am going to try using dd to wipe free space, to see whether the hardware is holding onto the data or BleachBit is just missing it. If dd doesn't do it, then I have to wonder how this data is being "protected", or whether certain addresses are hidden or missing from the inode table completely. I would say your results support my claim that there is a problem; now to see whether it's only BleachBit or whether other utilities and dd have the same issue.

gdejonge 02-21-2013 06:21 PM

BleachBit probably suffers from the same limitation the shred tool does. See the man page for shred and look for the CAUTION notes.
Short story: if the file system does not write in place, it writes to new sectors on the disk and leaves the original data intact.

selfprogrammed 02-21-2013 06:47 PM

This really depends on what other tools were accessing those files.
Most editors edit in a new temp file; when you save, the old file is deleted and the new one is renamed to replace it.
Any tool that behaves like this leaves backup files, temp files, and discarded copies that a simple erase program will not find.

I do not know BleachBit, but I suspect it does the same as erase: it overwrites the file in place on the disk until the data is unrecoverable.

Unless it has some special operation that hunts down all backup files with similar names, those will be recoverable by any undelete.
If the editor uses temp files, those do not even have a recognizable name
(i.e. Fj387xk-name and such), but they will contain a copy of the file being edited.
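A minimal demo of the save-to-a-temp-file-then-rename pattern described above, run in a throwaway directory. After the rename, the old file's blocks are simply freed, never overwritten, so a carver like foremost can still find "old secret" on disk even though the file shows the new contents. The file names are made up for the demo.

```shell
dir=$(mktemp -d)
printf 'old secret' > "$dir/report.doc"
# Editor-style safe save: write the new version beside the original...
printf 'new version' > "$dir/report.doc.tmp"
# ...then rename it over the old name, releasing the old blocks untouched.
mv "$dir/report.doc.tmp" "$dir/report.doc"
cat "$dir/report.doc"    # shows: new version
rm -r "$dir"
```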

gdejonge 02-22-2013 12:42 AM

I was not referring to the application doing an in-place write; from the application's point of view it does, but the question is whether the file system does. A lot of file systems won't. A very good example is btrfs, which does COW (copy-on-write): as soon as you try to write to a file, it copies the block you're writing and updates the copy, leaving the original block unaltered.
The only way to make sure all your files are really cleaned is to shred or BleachBit the partition, because in that case you bypass the file system.

minty33 02-22-2013 05:58 AM

update and comments
 
I just used "dd if=/dev/zero of=/home/junk.file" to fill the empty space, followed by "shred -z /home/junk.file", and then ran foremost for those pesky JPGs. For the first time I got 0 recovered files, so it's definitely BleachBit that is inconsistent.

As for the recent comments: it is possible BleachBit suffers from what shred does, because I'm using ext4, which is journaled. But two things. First, since foremost is searching only unallocated space (because I'm piping from blkls -A), wouldn't that not matter, given that I'm wiping all unallocated space with BleachBit, not just certain files? I guess the file system could be making its copies in space BleachBit has just written, and that's why it's inconsistent: sometimes the FS writes "ahead" of where BleachBit is writing and sometimes "behind" where it has already written, making the latter recoverable.
Second, would the fact that dd worked mean the previous statement is false, or would it mean dd simply isn't affected by it being a modern journaled FS?
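A scaled-down sketch of the dd + shred free-space wipe described above, run in a temp directory so it is safe to try. On a real wipe you would point of= at the target mount point (e.g. /home/junk.file, as in the post) and drop count= so dd writes zeros until the disk is full; the -u flag here is an addition that unlinks the filler after shredding it.

```shell
set -e
dir=$(mktemp -d)
# Fill the "free space" with zeros (capped at 1 MiB for the demo).
dd if=/dev/zero of="$dir/junk.file" bs=4096 count=256 2>/dev/null
# shred -z does a final zeroing pass; -u removes the filler afterwards,
# returning the now-zeroed space to the file system.
shred -z -u "$dir/junk.file"
rmdir "$dir"        # succeeds only because junk.file is gone
echo "wipe demo finished"
```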

jcgrin 02-22-2013 11:59 AM

There still seems to be something telling BleachBit it can't access certain files or bits of files. In my tests with the three ten-megabyte files, BleachBit never once wiped all 30 MB of disk space in one shot; it always took two, and in one case three, passes to cover the space. I tried getting BleachBit to show output while it ran, but that was a complete fail: I only got pages of output that might make sense to a computer, but not to me.

minty33 02-22-2013 12:26 PM

interesting note
 
Hey jcgrin, when you said it took more than one or two wipes to cover the 30 megs, I remembered something interesting from my successful dd wipe. The file that filled the free disk space was 56 GB, yet my file manager in Linux reported 49 GB of free space before running dd, and again after deleting the junk file. Both figures were capital GB, so it wasn't a gigabits-vs-gigabytes issue, or the old rounding-to-1000-instead-of-1024 issue. What this says is that even the OS is not seeing all the unallocated space, but dd must just write until there is no room, without using the OS to get its figure; in other words, dd doesn't care how much space there is or where it is. I wonder if BleachBit uses info from the OS to determine what and where it can write, and since the OS doesn't report some of the existing space, BleachBit doesn't touch it.

I'm curious whether, if you run dd just as I did in my previous post, your file ends up bigger than what your file manager said your free space was before making the junk file. Just shred or remove the file when you're done, like I wrote, to get your space back. If you want, test it with known deleted files: after dd and shredding the junk file, see if foremost recovers anything. My guess is it will be a perfect wipe, and your junk file will be bigger than the amount of space you thought you had.
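The comparison above (the file manager's free-space figure versus how much dd actually writes) can be scripted. This sketch only reads the reported figure with GNU df; the actual fill is left commented out because it genuinely fills the disk, and the mount point is just an example.

```shell
mnt=/tmp                                   # example mount point to check
avail=$(df -B1 --output=avail "$mnt" | tail -n 1)
echo "filesystem reports $avail bytes free on $mnt"
# Full wipe, as in the earlier post (run as root, then remove the filler):
#   dd if=/dev/zero of="$mnt/junk.file" bs=1M
#   shred -z -u "$mnt/junk.file"
```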

minty33 02-23-2013 06:26 AM

another follow up
 
As I said before, I have the same thread going on the BleachBit support forum, and after some back and forth I believe they recognize the problem. Here is a temporary fix, according to them. I haven't tried it, since I only read it five minutes ago, but I figured you would want to try it too. jcgrin, if you didn't see my previous post, check it out and try that as well.

FROM BLEACHBIT:
fixed?
Submitted by andrew on Sat, 02/23/2013 - 01:59.
Those "permission denied" errors are not important for the data remanence issue.
I developed an (automated) unit test, found the method in BleachBit version 0.9.5 was not completely effective, and improved it. There is still more work to do, but the changes so far may solve your problem.
For now you can test it yourself by overwriting FileUtilities.py with revision 2885 SVN (http://sourceforge.net/p/bleachbit/c....py?format=raw).
If you installed BleachBit from a repo or .deb file, this file goes somewhere in /usr (do a search for FileUtilities.py). Let me know how it goes.
Wiping free disk space is most effective as root because, by default, file systems such as ext3 and ext4 reserve 5% of the space for privileged accounts. If all the space is consumed, the log in the terminal should look like this:
note: wrote 119 files and 3619840 bytes in 196 seconds at 0.02 MB/s
note: 0 bytes available to non-super-user
note: 0 bytes available to super-user
As you can see, right now it is rather slow because it calls fsync to flush to the disk.

