How do you recover a file that was ftp'd to itself by accident?
A file originating from a server was supposed to be ftp'd to another server but was accidentally ftp'd to itself and therefore not transferred. The result is now a 0 byte file. I'm thinking that the actual file is still there and maybe I can just find out how to modify the properties to show that. Is there a way to recover this?
You can try Photorec (in spite of its name, it can recover many file formats). There are ready-to-use binaries at Photorec developer's website: http://www.cgsecurity.org/wiki/TestDisk.
Good luck !
If you had stopped the server or unmounted the affected partition the instant the transfer happened, then yes. If not, and the FTP directory resides on a partition that sees a lot of reads and writes, each one of those writes diminishes your chance of recovery. Unless the file can be recreated or retrieved easily, unless you have backups (do you?), and unless partition size makes the operation prohibitive, it's customary to make a 'dd' copy of the disk or partition so you at least have a backup to work from. Then you would run 'testdisk' (with the "/debug /log" switches) on the loop-mounted partition, navigate to the directory the file is in, and see if you can extract it that way. If that fails, then PhotoRec, the TestDisk companion app, may be able to force recovery if the file matches a known format, but only if it has a distinct header and footer. In either case the caveats mentioned earlier apply.
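To make the imaging step concrete, here is a minimal sketch of the 'dd' copy-and-verify step described above. It is demonstrated on an ordinary file so it runs without root; on the real system you would substitute the affected block device (the device name below is an assumption, not taken from this thread).

```shell
#!/bin/sh
# Sketch of imaging a partition with dd, demonstrated on an ordinary
# file standing in for the block device. On a real system, SRC would be
# something like /dev/md0 (assumed name) and IMG a file on a DIFFERENT disk.
workdir=$(mktemp -d)
SRC="$workdir/fake-partition"   # stand-in for the affected partition
IMG="$workdir/partition.img"

# Create 4 MiB of stand-in data.
dd if=/dev/urandom of="$SRC" bs=1M count=4 2>/dev/null

# The imaging command itself: conv=noerror,sync keeps going past read
# errors and pads short reads, so offsets in the image stay aligned
# with offsets on the source.
dd if="$SRC" of="$IMG" bs=1M conv=noerror,sync 2>/dev/null

# Verify the image matches the source before doing any recovery work on it.
cmp "$SRC" "$IMG" && echo "image matches source"
```

TestDisk and PhotoRec can then be pointed at the image file instead of the live partition, so failed recovery attempts can't make things worse.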
Try to run fsck.ext4. Perhaps it will manage to "repair" your corrupted file.
I wonder, did you try your "advice" yourself on some file you deleted? If not, why would you offer such advice?.. BTW file deletion doesn't require arcane magick so how is it, in your experience, fsck can "perhaps repair" a file that is unlinked?..
This file was not deleted but rather overwritten. I admit that I may be wrong with fsck.
@edomingox: As mentioned, take a dd(1) image of the filesystem now if you want any possible chance of recovering the file.
The problem is you didn't just unlink a file, you actually overwrote it. Please be more clear about the problems you've had with recovery (operating on the image!), as others have asked.
I'm working with Eugene on this issue (actually, Eugene is working with me to help undo my oops). Here's what happened. I have two identical Linux servers. Each with two data partitions: /data11 and /data12:
There is a boot drive of 16 GB, and two RAIDed disk arrays. The RAID configuration is 4x500GB solid state drives on each data partition, striped for speed, not redundancy.
I went to FTP a 280 GB file from one unit to the other. The problem was, each unit has the same IP, same directory structure, same filename. I was, after all, trying to create a mirrored backup. I had given the first unit a unique IP of .111 to FTP to the new unit's .110. However, there are two Ethernet ports, and the other port on the original was still set to .110. Soooo, I basically FTP'd it to the same unit. FTP went to create the new file (same IP/dir/filename), started at 0 bytes, then said, ok, I'm done, transferred all 0 bytes of your file.
Now I'm left with a 0 GB file instead of the 280 GB one. I believe the data is all still there, but the file system has simply flagged those blocks as available for reuse. I set the partition to read-only and haven't written any data to the drive since the mishap.
The file has a pair (it was a raw binary file of a Tx and Rx pair). So /data12/ has a 280 GB file (not a backup, but its partner). A stat of each of the files shows:
You can see the bad 0-byte file has a Modify date of 3/2/02, which the second, good 280 GB file does not. I'm thinking that with the inode, and the exact number of bytes afterward (Size: 300647710720, Blocks: 587202624, as per the second file), I can copy the bytes over to the second partition. Or change the file properties to say that the file is actually 280 GB, not 0 GB. I don't think there is much fragmentation, but I could be wrong.
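Before attempting any block-level copy, the stat numbers from the good partner file can be sanity-checked with shell arithmetic. A sketch (the 4096-byte filesystem block size is an assumption; stat's "Blocks" field is in 512-byte units regardless of filesystem block size):

```shell
#!/bin/sh
# Sanity-check the stat numbers from the good 280 GB partner file.
# FS_BLOCK is an assumed ext4 block size; verify with
#   tune2fs -l <device> | grep 'Block size'
SIZE=300647710720      # "Size:" from stat, in bytes
BLOCKS_512=587202624   # "Blocks:" from stat, in 512-byte units
FS_BLOCK=4096

echo $(( SIZE / FS_BLOCK ))          # full 4096-byte fs blocks of data
echo $(( BLOCKS_512 * 512 ))         # bytes actually allocated on disk
echo $(( BLOCKS_512 * 512 - SIZE ))  # surplus allocation (metadata overhead)
```

Note the allocated bytes come out slightly larger than the file size; that surplus is filesystem metadata (indirect/extent blocks), which is one reason the raw data is not one perfectly contiguous run even when the file itself is unfragmented.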
Thanks for the info. Apart from making a backup (or indicating it is impossible) there are a few more leads in this thread, and I wonder why neither you nor Eugene has followed those up?.. It is important to know how many write ops the partition saw between the overwrite and remounting it read-only, and a quick assessment with TestDisk could help too. And what items does your /data11 partition contain apart from the 280 GB file? OS files? FTP files or upload dirs? Log files? Were any of those written to, or are they static?
Last edited by unSpawn; 03-06-2012 at 06:50 PM.
Reason: //Typo
We've tried TestDisk, PhotoRec, and extundelete, but haven't been successful with any of them. The best attempt would probably be extundelete because it uses the journaling, but it gives us an error when kicked off (perhaps an incompatibility with the RAID configuration?). I believe the problems stem from there not really being a removed file. It's still there, with the same name, but with 0 bytes.
The disk is strictly used for data storage of large files. There is a boot drive of 16GB and two 2TB data partitions. I don't believe anything was written to the drive following the oops.
I was able to recover some data today by taking the last inode that was saved correctly before the bad file, adding one block, and 'dd'-ing 280 GB from that point. The problem is that the copy sometimes picks up stray bytes from data that may have already been there.
I think what I really need is some type of software that can reconstruct the journal entries showing which exact blocks were written to.
Quote:
The best attempt would probably be extundelete because it uses the journaling, but it gives us an error when kicked off
Its SourceForge page says: "The program is currently fairly fragile. If you run in to a problem that results in the program not working properly, please send a note to the mailing list, and it will likely be fixed in the next version".
Quote:
Originally Posted by cjc7913
I was able today to recover some data today by taking the last inode that saved correctly before the bad file, added one block and 'dd' 280GB. The problem is sometimes it records some bytes of apparently some existing bytes that may have already been there.
That's because block allocation isn't just direct primary block allocation but also indirect secondary and tertiary allocation. Any investment of time and effort to (even partially) recover the file's contents comes without any guarantee the file can be recovered, let alone be usable. Given the lack of a backup and of the requested information, I suggest you cut your losses and move on.
Quote:
Given the lack of a backup and requested information I suggest you cut your losses and move on.
Agreed. Cut your losses, plan/deploy a backup scheme today, and move on. (File recovery is never a sure thing. Expecting to recover from a live system, without immediately cloning and isolating the affected filesystem, is much less so.)
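One habit that would have caught this mishap immediately: checksum the file on both ends after every transfer, before trusting (or deleting) anything. A minimal sketch, with a local cp standing in for the FTP transfer (filenames are illustrative):

```shell
#!/bin/sh
# Post-transfer verification sketch: compare checksums on both ends.
# A local cp stands in for the FTP transfer so the demo is self-contained.
workdir=$(mktemp -d)
dd if=/dev/urandom of="$workdir/capture.bin" bs=1M count=2 2>/dev/null
cp "$workdir/capture.bin" "$workdir/remote-copy.bin"   # stand-in for the transfer

src=$(md5sum "$workdir/capture.bin"     | awk '{print $1}')
dst=$(md5sum "$workdir/remote-copy.bin" | awk '{print $1}')

if [ "$src" = "$dst" ]; then
    echo "transfer verified"
else
    echo "MISMATCH - keep the source!"
fi
```

In the scenario from this thread, the "copy" would have hashed to the checksum of an empty file and the mismatch would have been obvious before any damage was done.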