[SOLVED] Overwriting free space or overwriting single files restored by photorec
I'm trying to destroy some already-deleted files on my hard drive. I can recover those files easily with photorec, so my first attempt was to create a file filled with zero bits until the partition was full. However, after deleting this file, photorec discovered every single file again. Does ext4 (the fs type of the partition in question) use run-length compression? If so, the solution would be to create a file filled with random bits instead, but that takes far too long (I got around 8 MB per second) considering I only want to overwrite a few MBs.
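The zero-fill approach described above would look roughly like this (a sketch, capped at 64 MB here so it is safe to try; in practice you would drop the count= limit and let dd stop on its own when the filesystem is full):

```shell
# Fill free space with zeros, flush to disk, then remove the filler.
cd /tmp                                        # in reality: a directory on /home
dd if=/dev/zero of=zerofill bs=1M count=64 2>/dev/null
sync                                           # force the zeros onto the disk
rm zerofill                                    # hand the space back
```

Without the sync, the zeros may still be sitting in the page cache rather than on the platters when the filler file is deleted.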
So my last attempt was to extract the positions of the files from photorec's logfile and apply dd only to the ranges where the files were located, this time using random bits.
If this was the file entry within photorec's report.xml:
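A sketch of that range-overwrite idea (the OFFSET and LEN values below are invented for illustration, not taken from the actual report.xml, and DEV is pointed at a scratch image so the example is safe to run — in the real, dangerous scenario it would be the partition device, e.g. /dev/sda3):

```shell
# Stand-in "partition": a 64 KB zero-filled image file.
DEV=/tmp/scratch.img
dd if=/dev/zero of="$DEV" bs=4096 count=16 2>/dev/null

OFFSET=8192   # hypothetical byte offset of the recovered file
LEN=4096      # hypothetical file length in bytes

# Overwrite exactly that byte range with random data, without truncating.
# bs=1 makes seek= and count= byte-granular (slow, but simple).
dd if=/dev/urandom of="$DEV" bs=1 seek="$OFFSET" count="$LEN" conv=notrunc 2>/dev/null
sync
```

Note that this only makes sense if the offsets in the report are relative to the same device you write to; an off-by-one partition mix-up here scribbles random bytes over live data.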
Quote:
I'm trying to destroy some already deleted file on my hard drive. I can recover those files easily with photorec, so my first attempt was to create a file filled with zero bits until the partition was full. However, after deleting this file, photorec discovered every single file again.
My first thought is that something is wrong with this - it shouldn't be possible. When you say photorec discovered the files, could it actually recover the contents? Or was it just recovering the inodes with zeroed contents?
I assume you're deleting the files with rm, not with a file manager that uses a trashbin.
I can recover the entire file (it is possible to view the JPEGs and read the text files). The files were deleted with rm or removed from the trash.
My hard drive has three partitions, one for the root fs, one for the swap and one for the home folders, while I only recovered files from the home partition.
Are all the partitions ext4? Do you have any special features like RAID or lvm in place? Are there compressed or encrypting file systems involved?
And ... is it a regular spinning hard disk, not SSD?
Note the output of df /home after deleting all these files;
then create your space-filling files of zeroes, and check df /home again ... it must say 0 available.
Unless you have weird mount options, creating a file with zeroes using dd should actually overwrite that much space. (Techniques like fallocate will NOT write anything to the disk.)
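To illustrate that distinction, here is a small demo contrasting a sparse file (made with truncate, which — like fallocate — writes no data blocks) against a file dd actually filled with zeros. ls reports the same size for both, but du shows only the dd file occupies space on disk:

```shell
cd /tmp
truncate -s 10M sparse.bin                                 # sparse: nothing written
dd if=/dev/zero of=written.bin bs=1M count=10 2>/dev/null  # really writes 10 MB
ls -l sparse.bin written.bin    # both report 10485760 bytes
du -k sparse.bin written.bin    # only written.bin uses ~10240 KB of disk
```

This is exactly why the ls -l vs. du comparison suggested later in the thread is a meaningful check.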
It wouldn't hurt to post the exact commands and output here. Since something is weird.
Quote:
My hard drive has three partitions, one for the root fs, one for the swap and one for the home folders, while I only recovered files from the home partition.
If you have partitions, your dd command should have used /dev/sdaN (where N is the number matching your home partition; run mount to see which).
The shred command is designed specifically to do this kind of thing, but I think it can't help if you already removed the file.
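For files that still exist, the shred usage would look like this ("secret.jpg" is a placeholder name, not a file from the thread):

```shell
# shred overwrites a file in place *before* unlinking it; it cannot help
# once the file has already been deleted.
echo "sensitive data" > secret.jpg     # stand-in for the real file
shred -n 1 -z -u secret.jpg            # 1 random pass, a final zero pass, then remove
```

The -z pass hides the shredding by leaving zeros, and -u unlinks the file afterwards; without -u, shred overwrites the contents but leaves the file in place.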
Code:
% fdisk -l /dev/sda
Disk /dev/sda: 320.1 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e52fd
Device     Boot      Start        End     Blocks  Id System
/dev/sda1  *            63   97659134   48829536  83 Linux
/dev/sda2         97659135  101562929   1951897+  82 Linux swap / Solaris
/dev/sda3        101562930  625142447  261789759  83 Linux
Code:
% mount
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sys on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
dev on /dev type devtmpfs (rw,nosuid,relatime,size=1773364k,nr_inodes=215719,mode=755)
run on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755)
/dev/sda1 on / type ext4 (rw,relatime,data=ordered)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,relatime)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,relatime)
binfmt on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
/dev/sda3 on /home type ext4 (rw,relatime,data=ordered)
As you can see, sda1 and sda3 are formatted with ext4 and sda2 is used as swap. I don't use anything like LVM or RAID, and as far as I know there are no compressed or encrypted partitions (I'm quite sure there is nothing like that).
I created the file using the following command:
Code:
% pwd
/home/fcrok
% dd if=/dev/zero of=largefile
I have no SSD but a spinning HD
@ntubski: Are you sure the photorec data refers to the selected partition rather than to the entire HD?
Edit: information about LVM, RAID and fs types added
Code:
% df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 246G 17G 217G 8% /home
df after creating file is something like this:
Code:
% df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 246G 233G 0 8% /home
That's odd: your Use% hasn't changed. And Used+Avail does not add up to Size (whereas it does in the first case, otherwise I would say it is because of the 5% reserved by default by ext4).
btw this is one time where the -h option to df doesn't help!
Run
Code:
hexdump -C largefile
to make sure it's all zeroes
and
Code:
ls -l largefile
du largefile
to make sure it really uses that much space (as opposed to just having it allocated)
Please note that the difference in the second df snippet was just guessed because I deleted the file, otherwise I wasn't able to start XFCE. I will run hexdump and du as soon as I've recreated the file.
Quote:
Please note that the difference in the second df snippet was just guessed because I deleted the file, otherwise I wasn't able to start XFCE. I will run hexdump and du as soon as I've recreated the file.
Not very sporting to post inaccurate output
Try setting reserved blocks to zero, temporarily. Read up on tune2fs (because I'm not sure of the exact syntax and possible gotchas) ... something like
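A sketch of what that tune2fs step might look like (assumed syntax — verify against the man page; demonstrated on a scratch ext4 image so it is safe to run, whereas on the real system you would point tune2fs at /dev/sda3, as root):

```shell
dd if=/dev/zero of=/tmp/fs.img bs=1M count=8 2>/dev/null
mke2fs -F -q -t ext4 /tmp/fs.img           # throwaway ext4 filesystem
tune2fs -m 0 /tmp/fs.img                   # reserve 0% of blocks for root
tune2fs -l /tmp/fs.img | grep 'Reserved block count'
tune2fs -m 5 /tmp/fs.img                   # restore the default 5% afterwards
```

With the reservation at 0%, a zero-fill file can actually consume every last block, and df should then report 0 available.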
Quote:
@ntubski: Are you sure the photorec data refers to the selected partition rather than to the entire HD?
Good point, I only guessed that based on your system breaking after the dd command. Looking at your fdisk numbers the photorec offset would end up in the home partition even counting from the beginning of the HD, so I can't explain how your system broke anyway...
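That check can be done with simple shell arithmetic using the sector numbers from the fdisk output above (OFFSET below is a made-up example value, not one from the actual report.xml):

```shell
SECTOR=512
SDA3_START=$((101562930 * SECTOR))           # first byte of /dev/sda3
SDA3_END=$(((625142447 + 1) * SECTOR - 1))   # last byte of /dev/sda3
OFFSET=60000000000                           # hypothetical photorec offset

if [ "$OFFSET" -ge "$SDA3_START" ] && [ "$OFFSET" -le "$SDA3_END" ]; then
    echo "offset lands inside /home"
else
    echo "offset lands outside /home"
fi
```

If the offsets in report.xml are partition-relative rather than disk-relative, SDA3_START must first be added before writing anywhere near the raw device.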
If another file was allocated there after the photorec run and before the dd ... any dot-file for any app or the desktop environment ... it could very easily cause instability.
Using dd like that is always going to be dangerous, and it shouldn't be necessary for this requirement. PhotoRec itself lets you recover either:
from the whole partition (useful if the filesystem is corrupted), or
from the unallocated space only.
Did you delete the files, restore them to /home, and then run PhotoRec again selecting "from the whole partition"? If so, you should have chosen "from the unallocated space only".