Forensic carving tools like foremost or photorec (part of the testdisk package) look for file headers and footers regardless of the file system format. They can recover files even if the partition information has been changed, as long as the free space created when the files were deleted has not been used again.
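To see the principle in miniature, here's a quick sketch you can run safely: a "deleted" file's bytes sit in free space, and a carver finds them by signature alone. The byte values below are the real JPEG start/end markers; the surrounding junk and file name are made up for the demo.

```shell
# Build a blob of "free space" with a deleted JPEG's bytes still in it:
printf 'junk\xff\xd8\xff\xe0image-data\xff\xd9junk' > freespace.bin
# Locate the header signature the same way a carver would, by scanning
# raw bytes (dump to hex, then search for the signature):
od -An -tx1 freespace.bin | tr -d ' \n' | grep -o 'ffd8ffe0'
```

foremost and photorec do essentially this scan across the whole device or image, properly and at speed.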
Windows file systems use an "even wear" strategy: they keep writing new files to the oldest contiguous run of free clusters, using up all the never-written free space on a drive before reusing the space created by deleting files over time.
Some Mac file systems implement this as well, but I believe some if not all Linux file systems allocate files differently and may overwrite a recently freed area regardless of how long ago it became free.
As suggested above, make an image of the drive. Once you have an image you can continue using the computer without fear of overwriting recoverable data. The best way to dig for data is to make a copy of the image and work on the copy; that way you always have the original to fall back on. If you don't have a larger drive to store the image on for mounting, you can make the image in slices: 4GB slices fit on a FAT32 file system (which has a 4GB file-size limit), and 650MB slices fit on CDs. Many applications can make the image in slices and then mount them for forensic analysis/data recovery, just like mounting the drive in the slave position when booting from another drive. Or you can make the slices with one specialized application and mount them with another.
One of the best applications that does this is EnCase; there may be a way to mount slices in Linux that I'm not aware of. So far I've only mounted hard drive images in their entirety, not slices; I have only read about slices.
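A minimal imaging-in-slices sketch with stock tools (dd, split, cat). A real job would read from /dev/sdX; here an 8 MB file of random bytes stands in so the commands are safe to try, and the slice size is shrunk to match.

```shell
# Stand-in for the drive (a real job would use: dd if=/dev/sdX ...):
dd if=/dev/urandom of=disk.img bs=1M count=8 2>/dev/null
# Slice the image into fixed-size pieces (FAT32 would need ~4GB slices,
# e.g. -b 4095M; 2 MB keeps this demo small). Slices get .aa, .ab, ... suffixes:
split -b 2M disk.img disk.img.
# Later, reassemble the slices into a working COPY and verify it matches:
cat disk.img.?? > work-copy.img
cmp -s disk.img work-copy.img && echo "copy verified"
# Mounting the copy read-only for analysis needs root (fs type assumed):
#   sudo mount -o ro,loop work-copy.img /mnt/image
```

Working on the reassembled copy, never the original image, follows the fall-back advice above.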
Foremost is kind of nice in that you can easily customize its configuration file; it ships with header (and in some cases header and footer) signature information for many common file extensions.
When you open a file in a hex editor, you can see how the header and/or footer signature in the foremost configuration entry for that file's extension was derived. With that knowledge, you can open a file with a "not so common" extension in a hex editor, find its signature, and add an entry to the configuration file for it.
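Roughly what that looks like in practice, using od in place of a hex editor. The header bytes below are the real PNG magic; the .xyz extension, file contents, and the numbers in the sample config line are made up for illustration:

```shell
# Make a stand-in for a known-good file of the uncommon type:
printf '\x89PNGrest-of-file' > sample.xyz
# Inspect the first few bytes in hex to derive the header signature:
head -c 4 sample.xyz | od -An -tx1 | tr -d ' \n'
# A matching foremost.conf entry (extension, case sensitivity, max file
# size, header, optional footer) would then look roughly like:
#   xyz  y  2000000  \x89\x50\x4e\x47
```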
Quote:
The same applies to disk defragmenters.
When a file is written contiguously (all of its clusters/blocks in line, one after the other), it is not fragmented. With many file systems, fragmentation usually doesn't set in until all free space has been written to at least once and there is not much free space left on the drive. Because files deleted over time were likely scattered all over the drive, when the system can't find enough contiguous clusters/blocks it writes a file across different areas, creating a fragmented file.
De-fragmenting a drive rearranges many files to store them contiguously so they are no longer fragmented. This overwrites many "free space" clusters/blocks that still contain recoverable data, so de-fragmenting reduces the chance of successful file recovery later.