LinuxQuestions.org
Linux - Security This forum is for all security related questions.
Questions, tips, system compromises, firewalls, etc. are all included here.

Old 09-15-2012, 06:18 AM   #1
fcrok
LQ Newbie
 
Registered: Sep 2012
Distribution: archlinux
Posts: 10

Rep: Reputation: Disabled
Overwriting free space or overwriting single files restored by photorec


I'm trying to destroy some already-deleted files on my hard drive. I can recover those files easily with photorec, so my first attempt was to create a file filled with zero bits until the partition was full. However, after deleting this file, photorec still discovered every single file. Does ext4 (the filesystem type of the partition in question) use run-length compression? If that is the case, the solution would be to create a file filled with random bits instead, but that takes far too long (I measured around 8 MB per second) considering I only want to overwrite a few MB.

So my last try was to extract the positions of the files from the photorec logfile and apply dd only to some ranges where the files were located, this time using random bits:

If this was the file entry within photorec's report.xml
Code:
<fileobject>
  <filename>f0272760.txt</filename>
  <filesize>114</filesize>
  <byte_runs>
    <byte_run offset="0" img_offset="52139873280" len="114"/>
  </byte_runs>
</fileobject>
my dd command looked like this
Code:
dd if=/dev/urandom of=/dev/sda bs=1 count=114 seek=52139873280 conv=notrunc
This broke my whole system.

Any ideas how I can destroy those files without creating a large random file? Or maybe my dd command just needs some adjustments?

Hope you have some advice for me
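One workaround I've seen suggested for the slow /dev/urandom read speed (a sketch only; the output path and size are placeholders for this demo) is to generate the pseudorandom stream by encrypting /dev/zero with AES-CTR under a throwaway key, which is usually much faster than reading /dev/urandom directly:

```shell
# Sketch: fast pseudorandom stream via AES-CTR keystream instead of
# /dev/urandom. /tmp/randomfill and the 4M size are demo placeholders;
# in practice you'd let it run until the partition is full.
KEY=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
openssl enc -aes-256-ctr -pass pass:"$KEY" -nosalt </dev/zero 2>/dev/null \
  | head -c 4M >/tmp/randomfill || true
```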

Last edited by fcrok; 09-15-2012 at 06:21 AM.
 
Old 09-15-2012, 06:46 AM   #2
SecretCode
Member
 
Registered: Apr 2011
Location: UK
Distribution: Kubuntu 11.10
Posts: 562

Rep: Reputation: 102
Quote:
Originally Posted by fcrok View Post
I'm trying to destroy some already-deleted files on my hard drive. I can recover those files easily with photorec, so my first attempt was to create a file filled with zero bits until the partition was full. However, after deleting this file, photorec still discovered every single file.
My first thought is that something is wrong with this - it shouldn't be possible. When you say photorec discovered the files, could it actually recover the contents? Or was it just recovering the inodes with zeroed contents?

I assume you're deleting the files with rm, not with a file manager that uses a trashbin.

I assume the hard drive is a single partition.
 
Old 09-15-2012, 06:58 AM   #3
fcrok
LQ Newbie
 
Registered: Sep 2012
Distribution: archlinux
Posts: 10

Original Poster
Rep: Reputation: Disabled
I can recover the entire files (the JPEGs are viewable and the text files readable). The files were deleted using rm or removed from the trash.

My hard drive has three partitions: one for the root fs, one for swap and one for the home folders; I only recovered files from the home partition.
 
Old 09-15-2012, 07:43 AM   #4
SecretCode
Member
 
Registered: Apr 2011
Location: UK
Distribution: Kubuntu 11.10
Posts: 562

Rep: Reputation: 102
Are all the partitions ext4? Do you have any special features like RAID or lvm in place? Are there compressed or encrypting file systems involved?

And ... is it a regular spinning hard disk, not SSD?

Note the output of df /home after deleting all these files;
then create your space-filling file of zeroes and check df /home again ... it should report 0 available.

Unless you have weird mount options, creating a file with zeroes using dd should actually overwrite that much space. (Techniques like fallocate will NOT write anything to the disk.)

It wouldn't hurt to post the exact commands and output here, since something is clearly weird.
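For reference, a minimal sketch of that space-filling step (the path /tmp/zerofill is made up, and count= only caps this demo at 8 MiB; drop count= to run until dd stops with "No space left on device"):

```shell
# Sketch of the zero-filling step. /tmp/zerofill is a placeholder and
# count=8 caps this demo at 8 MiB; drop count= to fill the partition.
dd if=/dev/zero of=/tmp/zerofill bs=1M count=8 conv=fsync status=none
sync                  # make sure the zeros really reach the disk
rm -f /tmp/zerofill   # give the space back afterwards
```

conv=fsync forces the data to disk before dd exits, so the overwrite isn't left sitting in the page cache.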
 
Old 09-15-2012, 07:48 AM   #5
ntubski
Senior Member
 
Registered: Nov 2005
Distribution: Debian, Arch
Posts: 3,781

Rep: Reputation: 2081
Quote:
Originally Posted by fcrok View Post
My hard drive has three partitions: one for the root fs, one for swap and one for the home folders; I only recovered files from the home partition.
If you have partitions, your dd command should have used /dev/sdaN (where N is the number matching your home partition; run mount to see which).

The shred command is designed specifically for this kind of thing, but I don't think it can help once the file has already been removed.
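For completeness, a minimal shred sketch for a file that does still exist (the filename is made up; note the shred man page warns the overwrite may not be fully effective on journaling filesystems, depending on the journaling mode):

```shell
# shred overwrites a still-existing file in place: -n 3 makes three
# random overwrite passes, -u unlinks it afterwards. Filename is made up.
echo "sensitive" >/tmp/demo_secret
shred -u -n 3 /tmp/demo_secret
```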
 
Old 09-15-2012, 08:18 AM   #6
fcrok
LQ Newbie
 
Registered: Sep 2012
Distribution: archlinux
Posts: 10

Original Poster
Rep: Reputation: Disabled
df before creating file:
Code:
% df -h
Filesystem              Size  Used Avail Use% Mounted on
rootfs                   46G  5.5G   39G  13% /
dev                     1.7G     0  1.7G   0% /dev
run                     1.7G  372K  1.7G   1% /run
/dev/sda1                46G  5.5G   39G  13% /
shm                     1.7G  696K  1.7G   1% /dev/shm
tmpfs                   1.7G   24K  1.7G   1% /tmp
/dev/sda3               246G   17G  217G   8% /home
df after creating file is something like this:
Code:
% df -h
Filesystem              Size  Used Avail Use% Mounted on
rootfs                   46G  5.5G   39G  13% /
dev                     1.7G     0  1.7G   0% /dev
run                     1.7G  372K  1.7G   1% /run
/dev/sda1                46G  5.5G   39G  13% /
shm                     1.7G  696K  1.7G   1% /dev/shm
tmpfs                   1.7G   24K  1.7G   1% /tmp
/dev/sda3               246G  233G  0      8% /home
Code:
% fdisk -l /dev/sda

Disk /dev/sda: 320.1 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e52fd

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          63    97659134    48829536   83  Linux
/dev/sda2        97659135   101562929     1951897+  82  Linux swap / Solaris
/dev/sda3       101562930   625142447   261789759   83  Linux
Code:
% mount
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sys on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
dev on /dev type devtmpfs (rw,nosuid,relatime,size=1773364k,nr_inodes=215719,mode=755)
run on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755)
/dev/sda1 on / type ext4 (rw,relatime,data=ordered)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,relatime)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,relatime)
binfmt on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
/dev/sda3 on /home type ext4 (rw,relatime,data=ordered)
As you can see, sda1 and sda3 are formatted with ext4 and sda2 is used as swap. I don't use anything like LVM or RAID, and as far as I know there are no compressed or encrypted partitions (in fact I'm quite sure there aren't).

I created the file using the following command:

Code:
% pwd
/home/fcrok
% dd if=/dev/zero of=largefile
I have no SSD, just a spinning hard disk.

@ntubski: Are you sure the photorec data refers to the selected partition rather than to the entire HD?

Edit: added information about LVM, RAID and filesystem types.

Last edited by fcrok; 09-15-2012 at 08:29 AM.
 
Old 09-15-2012, 08:44 AM   #7
SecretCode
Member
 
Registered: Apr 2011
Location: UK
Distribution: Kubuntu 11.10
Posts: 562

Rep: Reputation: 102
Quote:
Originally Posted by fcrok View Post
df before creating file:
Code:
% df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/sda3               246G   17G  217G   8% /home
df after creating file is something like this:
Code:
% df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/sda3               246G  233G  0      8% /home
That's odd: your Use% hasn't changed. And Used+Avail does not add up to Size (whereas it does in the first case, otherwise I would say it is because of the 5% reserved by default by ext4).

btw this is one time where the -h option to df doesn't help!

Run
Code:
hexdump -C largefile
to make sure it's all zeroes
and
Code:
ls -l largefile
du largefile
to make sure it really uses that much space (as opposed to just having it allocated)
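To see why both checks matter, here's a quick sketch with a deliberately sparse file (temporary file, made up for the demo): ls -l reports the apparent size, while du reports the blocks actually allocated on disk.

```shell
# A sparse file has a large apparent size but almost no allocated blocks.
f=$(mktemp)
truncate -s 1G "$f"   # 1 GiB apparent size, (almost) nothing allocated
ls -l "$f"            # apparent size: 1073741824 bytes
du -k "$f"            # allocated size: close to 0
rm -f "$f"
```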
 
1 members found this post helpful.
Old 09-15-2012, 08:50 AM   #8
SecretCode
Member
 
Registered: Apr 2011
Location: UK
Distribution: Kubuntu 11.10
Posts: 562

Rep: Reputation: 102
Quote:
Originally Posted by SecretCode View Post
because of the 5% reserved by default by ext4
Actually this could be something to do with it ... what is
Code:
sudo tune2fs -l /dev/sda3 | grep "Reserved block count"
Your largefile may not be allowed to fill the entire space because of the filesystem's reserved block count.
 
1 members found this post helpful.
Old 09-15-2012, 09:03 AM   #9
fcrok
LQ Newbie
 
Registered: Sep 2012
Distribution: archlinux
Posts: 10

Original Poster
Rep: Reputation: Disabled
tune2fs gives the following:
Code:
% sudo tune2fs -l /dev/sda3 | grep "Reserved block count"
Reserved block count:     3272371
Please note that the numbers in the second df snippet were just guessed, because I had to delete the file; otherwise I couldn't start XFCE. I will run hexdump and du as soon as I've recreated the file.
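Assuming the common 4 KiB ext4 block size (which I could confirm with tune2fs -l /dev/sda3 | grep 'Block size'), that reserved count corresponds to roughly 12 GiB that a non-root user cannot fill:

```shell
# Reserved space, assuming a 4 KiB block size (an assumption; check
# the actual value with: tune2fs -l /dev/sda3 | grep 'Block size').
RESERVED_BLOCKS=3272371
BLOCK_SIZE=4096
echo "$((RESERVED_BLOCKS * BLOCK_SIZE / 1073741824)) GiB reserved"
```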

Oh, I almost forgot: big thanks for your help!
 
Old 09-15-2012, 09:16 AM   #10
SecretCode
Member
 
Registered: Apr 2011
Location: UK
Distribution: Kubuntu 11.10
Posts: 562

Rep: Reputation: 102
Quote:
Originally Posted by fcrok View Post
tune2fs gives the following:
Code:
% sudo tune2fs -l /dev/sda3 | grep "Reserved block count"
Reserved block count:     3272371
Please note that the numbers in the second df snippet were just guessed, because I had to delete the file; otherwise I couldn't start XFCE. I will run hexdump and du as soon as I've recreated the file.
Not very sporting to post inaccurate output

Try setting the reserved blocks to zero, temporarily. Read up on tune2fs first (because I'm not sure of the exact syntax and possible gotchas) ... something like
Code:
sudo tune2fs -m 0 /dev/sda3
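The effect can be tried safely on a throwaway ext4 image first (a sketch; assumes e2fsprogs is installed, and needs no root since it operates on a plain file rather than /dev/sda3):

```shell
# Try tune2fs -m on a small throwaway ext4 image instead of a real disk.
img=$(mktemp)
truncate -s 8M "$img"
mkfs.ext4 -q -F "$img"                            # tiny test filesystem
tune2fs -m 0 "$img" >/dev/null                    # reserve 0% for root
tune2fs -l "$img" | grep "Reserved block count"   # should now read 0
rm -f "$img"
```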

Quote:
Originally Posted by fcrok View Post
Oh, I almost forgot: Big thank for your help
There's a button for that
 
Old 09-15-2012, 10:19 AM   #11
ntubski
Senior Member
 
Registered: Nov 2005
Distribution: Debian, Arch
Posts: 3,781

Rep: Reputation: 2081
Quote:
Originally Posted by fcrok View Post
@ntubski: Are you sure the photorec data refers to the selected partition rather than to the entire HD?
Good point; I only guessed that based on your system breaking after the dd command. Looking at your fdisk numbers, though, the photorec offset would land in the home partition even counting from the beginning of the disk, so I can't explain how your system broke anyway...
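For what it's worth, the offset arithmetic sketches out like this, using the img_offset from report.xml and the /dev/sda3 start sector from the fdisk output above:

```shell
# Translate photorec's whole-disk img_offset into an offset relative
# to /dev/sda3, using the Start column from fdisk -l.
IMG_OFFSET=52139873280        # img_offset from report.xml
PART_START_SECTOR=101562930   # /dev/sda3 start sector from fdisk -l
SECTOR_SIZE=512
PART_OFFSET=$((IMG_OFFSET - PART_START_SECTOR * SECTOR_SIZE))
echo "$PART_OFFSET"           # value for seek= when writing to /dev/sda3
```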
 
Old 09-15-2012, 10:41 AM   #12
SecretCode
Member
 
Registered: Apr 2011
Location: UK
Distribution: Kubuntu 11.10
Posts: 562

Rep: Reputation: 102
If another file was allocated there after the photorec run and before the dd ... any dot-file for any app or the desktop environment ... it could very easily cause instability.

Using dd like that is always going to be dangerous. And it shouldn't be necessary for this requirement.
 
Old 09-15-2012, 10:52 AM   #13
fcrok
LQ Newbie
 
Registered: Sep 2012
Distribution: archlinux
Posts: 10

Original Poster
Rep: Reputation: Disabled
Maybe there is a tool that checks whether any inode refers to a given range of the partition? That could make my dd command safe.
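It looks like debugfs from e2fsprogs can do this mapping: its icheck command reports which inode, if any, owns a given filesystem block number. A sketch on a throwaway image (running it against the mounted /dev/sda3 directly would be risky; the block number is just an illustrative argument):

```shell
# debugfs's icheck maps filesystem block numbers to owning inodes.
img=$(mktemp)
truncate -s 8M "$img"
mkfs.ext4 -q -F "$img"
debugfs -R "icheck 30" "$img" 2>/dev/null   # prints "Block  Inode number"
rm -f "$img"
```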
 
Old 09-15-2012, 11:20 AM   #14
fcrok
LQ Newbie
 
Registered: Sep 2012
Distribution: archlinux
Posts: 10

Original Poster
Rep: Reputation: Disabled
Okay, I made a new file, 217 GB in size, and this is the output of du largefile and ls -l largefile:

Code:
% du largefile
226920920	largefile
% ls -l largefile 
-rw-r--r-- 1 fcrok users 232366993408 Sep 15 18:11 largefile
hexdump will need some time to read the whole file, so I will add the full output later, but so far it shows

Code:
0000000 0000 0000 0000 0000 0000 0000 0000 0000
*
I will now set the reserved block count to zero, fill up the remaining space and run photorec again.
 
Old 09-15-2012, 11:22 AM   #15
OlRoy
Member
 
Registered: Dec 2002
Posts: 306

Rep: Reputation: 86
I've never used PhotoRec, but what exactly did you choose here?

Quote:
Carve the partition or unallocated space only

from the whole partition (useful if the filesystem is corrupted) or
from the unallocated space only
Did you delete the files, restore them to /home, and then eventually run PhotoRec again with "from the whole partition"? If so, you should have chosen "from the unallocated space only".

Last edited by OlRoy; 09-15-2012 at 11:23 AM.
 
  

