LinuxQuestions.org


edomingox 03-05-2012 02:22 PM

How do you recover a file that was ftp'd to itself by accident?
 
A file originating from a server was supposed to be ftp'd to another server but was accidentally ftp'd to itself and therefore not transferred. The result is now a 0 byte file. I'm thinking that the actual file is still there and maybe I can just find out how to modify the properties to show that. Is there a way to recover this?

zk1234 03-05-2012 03:00 PM

You can try PhotoRec (in spite of its name, it can recover many file formats). There are ready-to-use binaries on the PhotoRec developer's website: http://www.cgsecurity.org/wiki/TestDisk.
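A minimal invocation, assuming the affected partition is /dev/sdXN and /mnt/recovery is a directory on a *different* disk (both names are illustrative, adjust them to your layout):

photorec /log /d /mnt/recovery/ /dev/sdXN

PhotoRec writes whatever it can carve into recup_dir.* folders under the directory given with /d.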
Good luck!

edomingox 03-05-2012 03:03 PM

Neither worked. The filesystem is ext4 with journaling.

zk1234 03-05-2012 03:13 PM

Quote:

Originally Posted by edomingox (Post 4619203)
Neither worked. The filesystem is ext4 with journaling.

Try running fsck.ext4. Perhaps it will manage to "repair" your corrupted file.

unSpawn 03-05-2012 03:17 PM

If you had stopped the server or unmounted the affected partition the instant the transfer happened, then yes. If not, and the FTP directory resides on a partition that sees a lot of reads and writes, each one of those diminishes your chance of recovery. Unless the file can be recreated or retrieved easily, unless you have backups (do you?), and unless the partition size makes the operation prohibitive, it's customary to make a 'dd' copy of the disk or partition so you at least have a backup to work from. Then you would run 'testdisk' (with the "/debug /log" switches) on the loop-mounted image, navigate to the directory the file is in, and see if you can extract it that way. If that fails, PhotoRec, the TestDisk companion app, may be able to force recovery if the file matches a known type, but only if it has a distinct header and footer. In either case the caveats mentioned earlier apply.
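A minimal sketch of that workflow, assuming the affected partition is /dev/sdXN and /mnt/rescue lives on a different disk with room for the image (names are illustrative):

dd if=/dev/sdXN of=/mnt/rescue/partition.img bs=1M conv=noerror,sync
losetup -f --show -r /mnt/rescue/partition.img   # attach a read-only loop device, e.g. /dev/loop0
testdisk /debug /log /dev/loop0                  # or point testdisk at the image file directly

Work only on the copy from then on; the original stays untouched.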

unSpawn 03-05-2012 03:19 PM

Quote:

Originally Posted by zk1234 (Post 4619209)
Try running fsck.ext4. Perhaps it will manage to "repair" your corrupted file.

I wonder, did you try your "advice" yourself on some file you deleted? If not, why would you offer such advice?.. BTW, file deletion doesn't require arcane magick, so how is it, in your experience, that fsck can "perhaps repair" a file that is unlinked?..

zk1234 03-05-2012 03:46 PM

Quote:

Originally Posted by unSpawn (Post 4619217)
I wonder, did you try your "advice" yourself on some file you deleted? If not, why would you offer such advice?.. BTW, file deletion doesn't require arcane magick, so how is it, in your experience, that fsck can "perhaps repair" a file that is unlinked?..

The file was not deleted but rather overwritten. I admit that I may be wrong about fsck.

zk1234 03-05-2012 03:50 PM

Quote:

Originally Posted by edomingox (Post 4619203)
Neither worked. The filesystem is ext4 with journaling.

What do you mean:
- the binaries did not work on your system, or
- they worked, but were unable to recover anything?

anomie 03-05-2012 11:03 PM

@edomingox: As mentioned, take a dd(1) image of the filesystem now if you want any possible chance of recovering the file.

The problem is you didn't just unlink a file; you actually overwrote it. Please be more specific about the problems you've had with recovery (operating on the image!), as others have asked.

cjc7913 03-06-2012 07:44 AM

I'm working with Eugene on this issue (actually, Eugene is working with me to help undo my oops). Here's what happened. I have two identical Linux servers, each with two data partitions, /data11 and /data12:

Filesystem            1K-blocks       Used  Available Use% Mounted on
/dev/sda2               5952284    4632468    1012576  83% /
/dev/sda4               7702976     739172    6566196  11% /midas
/dev/sda1                497829      16316     455811   4% /boot
tmpfs                   1813652          0    1813652   0% /dev/shm
/dev/mapper/icefs11  1999539124   48434336 1951104788   3% /data11
/dev/mapper/icefs12  1999539124  816779736 1182759388  41% /data12

There is a 16 GB boot drive and two RAIDed disk arrays; each data partition sits on 4x500 GB solid-state drives, striped for speed, not redundancy.

I went to FTP a 280 GB file from one unit to the other. The problem was, each unit has the same IP, same directory structure, same filename; I was, after all, trying to create a mirrored backup. I had given the first unit a unique IP of .111 to FTP to the new unit's .110. However, there are two Ethernet ports, and the other port on the original was still set to .110. So I basically FTP'd the file to the same unit: FTP created the "new" file (same IP/dir/filename), starting at 0 bytes and truncating the original in the process, then said, OK, I'm done, transferred all 0 bytes of your file.

Now I'm left with a 0 byte file instead of the 280 GB one. I believe the data is all still there, but the filesystem has simply flagged the blocks as available for re-use. I set the partition to read-only and haven't written any data to the drive since the mishap.

The file has a partner (it was a raw binary file of a Tx and Rx pair), so /data12/ has a 280 GB file (not a backup, but its partner). A stat of each of the files shows:

|QT464-SR5-01| stat /data11/xmidas/ProjectE_280gb_tx_28Feb12.dat
  File: `/data11/xmidas/ProjectE_280gb_tx_28Feb12.dat'
  Size: 0             Blocks: 0          IO Block: 4096   regular empty file
Device: fd00h/64768d  Inode: 20484       Links: 1
Access: (0777/-rwxrwxrwx)  Uid: (  500/  xmidas)   Gid: (  100/   users)
Access: 2012-02-28 22:02:09.551103026 +0000
Modify: 2012-03-02 14:21:58.432850510 +0000
Change: 2012-03-02 14:21:58.432850510 +0000

|QT464-SR5-01| stat /data12/xmidas/ProjectE_280gb_rx_28Feb12.dat
  File: `/data12/xmidas/ProjectE_280gb_rx_28Feb12.dat'
  Size: 300647710720  Blocks: 587202624  IO Block: 4096   regular file
Device: fd01h/64769d  Inode: 49153       Links: 1
Access: (0777/-rwxrwxrwx)  Uid: (  500/  xmidas)   Gid: (  500/  xmidas)
Access: 2012-02-28 22:02:09.551103026 +0000
Modify: 2012-02-28 23:16:24.572836778 +0000
Change: 2012-03-05 12:27:43.078981444 +0000


You can see the bad 0 byte file has a Modify date of 3/2/12, which the second, good 280 GB file does not. I'm thinking that, armed with the inode and the exact number of bytes (Size: 300647710720, Blocks: 587202624, as per the second file), I can copy the raw bytes over to the second partition, or else change the file's metadata to say that the file is actually 280 GB, not 0. I don't think there is much fragmentation, but I could be wrong.
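(As a sanity check on those stat numbers, assuming the usual meaning of the fields: st_blocks counts 512 byte sectors, so the good file occupies 587202624 x 512 = 300,647,743,488 bytes on disk, while its size of 300,647,710,720 bytes is exactly 73,400,320 blocks of 4096 bytes; the extra 32 KiB is presumably filesystem metadata, such as extent or indirect blocks, charged to the file.)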

Can anybody save me?

unSpawn 03-06-2012 06:48 PM

Thanks for the info. Apart from making a backup (or indicating it is impossible), there are a few more leads in this thread, and I wonder why neither you nor Eugene has followed those up?.. It is important to know how many write operations the partition saw between the deletion and remounting it read-only, and a quick assessment with TestDisk could help too. And what does your /data11 partition contain apart from the 280 GB file? OS files? FTP files or upload dirs? Log files? Were any of those written to, or are they static?

cjc7913 03-06-2012 08:26 PM

We've tried TestDisk, PhotoRec, and extundelete, but haven't been successful with any of them. The best bet would probably be extundelete, because it uses the journal, but it gives us an error when kicked off (perhaps an incompatibility with the RAID configuration?). I believe the problems stem from there not really being a removed file: it's still there, with the same name, but with 0 bytes.

The disk is strictly used for data storage of large files. There is a boot drive of 16GB and two 2TB data partitions. I don't believe anything was written to the drive following the oops.

I was able to recover some data today by taking the last inode that saved correctly before the bad file, adding one block, and 'dd'-ing 280 GB from there. The problem is the output sometimes contains stretches of stale bytes that were apparently already on the disk.
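Roughly, the carve looked like this, where START stands for the first 4 KiB filesystem block of the lost file (a hypothetical placeholder, found by inspecting the last block of the neighboring inode):

dd if=/dev/mapper/icefs11 of=/data12/xmidas/recovered_tx.dat bs=4096 skip=START count=73400320
# 73400320 x 4096 = 300647710720 bytes, the size of the intact partner file
# assumes the file was written contiguously; any fragmentation or interleaved
# metadata blocks will leave garbage in the output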

I think what I really need is some type of software that can reconstruct, from the journal entries, exactly which blocks were written to.
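For what it's worth, debugfs from e2fsprogs can at least dump journal records; a rough sketch, using the dead file's inode number 20484 from the stat output above (NNNN is a placeholder for the block number imap prints):

debugfs -c /dev/mapper/icefs11     # -c opens the filesystem read-only
debugfs:  imap <20484>             # print which block holds inode 20484
debugfs:  logdump -c -b NNNN       # dump journal records touching that block

With ext4's default data=ordered mode only metadata passes through the journal, so at best this recovers old inode/extent information, not file contents.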

Ideas?

unSpawn 03-07-2012 05:23 PM

Quote:

Originally Posted by cjc7913 (Post 4620381)
The best bet would probably be extundelete, because it uses the journal, but it gives us an error when kicked off

Its SourceForge page says: "The program is currently fairly fragile. If you run in to a problem that results in the program not working properly, please send a note to the mailing list, and it will likely be fixed in the next version".


Quote:

Originally Posted by cjc7913 (Post 4620381)
I was able to recover some data today by taking the last inode that saved correctly before the bad file, adding one block, and 'dd'-ing 280 GB from there. The problem is the output sometimes contains stretches of stale bytes that were apparently already on the disk.

That's because block allocation isn't just direct primary blocks but also indirect secondary and tertiary ones. Any investment of time and effort to (even partially) recover the file's contents comes without any guarantee that the file can be recovered, let alone be usable. Given the lack of a backup and of the requested information, I suggest you cut your losses and move on.

anomie 03-07-2012 05:29 PM

Quote:

Originally Posted by unSpawn
Given the lack of a backup and of the requested information, I suggest you cut your losses and move on.

Agreed. Cut your losses, plan/deploy a backup scheme today, and move on. (File recovery is never a sure thing. Expecting to recover from a live system, without immediately cloning and isolating the affected filesystem, is much less so.)

