Hello,
I have a 2 TB external ext2-formatted HDD (Western Digital) that failed to mount from one moment to the next, right after I backed up almost a TB of data from a server whose own half-crashed HDD failed completely a few hours later.
I have no idea if the two events are related, but at the moment, I have one very dead HDD (the one from the server) and a data back-up on an external HDD that won't mount any more -- and this latter one I would very much like to restore. Here's the full story:
Code:
# mount -t ext2 /dev/sdh1 /backup_hdd
mount: wrong fs type, bad option, bad superblock on /dev/sdh1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
OK, so checking with e2fsck:
Code:
e2fsck 1.42.9 (4-Feb-2014)
The filesystem size (according to the superblock) is 488378104 blocks
The physical size of the device is 488378000 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? y
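For what it's worth, the mismatch is 104 blocks of 4 KiB, i.e. roughly 416 KiB that the superblock expects beyond the end of the partition, which looks more like a shrunken partition entry than a grown superblock. The state itself is easy to reproduce on a throw-away image file (no root needed, and the real device is not touched):

```shell
set -e
# Reproduce the same class of error on a scratch image: create a small
# ext2 file system, then truncate the "device" underneath it so the
# superblock claims more blocks (1024) than physically exist (1000).
img=$(mktemp)
dd if=/dev/zero of="$img" bs=4096 count=1024 status=none
mke2fs -F -q -t ext2 -b 4096 "$img"
truncate -s $((1000 * 4096)) "$img"
# -n answers "no" to every question, so this only reports, never writes:
e2fsck -fn "$img" || true
rm -f "$img"
```

On the real disk, comparing `blockdev --getsize64 /dev/sdh1` against the block count in the superblock would show on which side those 104 blocks went missing.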
All right, checking for backup superblocks:
Code:
# mke2fs -n /dev/sdh1
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
122101760 inodes, 488378000 blocks
24418900 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
14905 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
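One caveat with this trick: `mke2fs -n` only prints the right backup locations if it happens to pick the same parameters the original mkfs used. When the file system is still readable enough, `dumpe2fs` lists the backups that actually exist, because it reads the real group descriptors. A scratch-image sketch (on the real disk it would be `dumpe2fs /dev/sdh1 | grep -i backup`):

```shell
set -e
# dumpe2fs reads the actual group descriptors instead of guessing, so it
# reports the backup superblocks that really exist on disk.
img=$(mktemp)
truncate -s 1G "$img"                 # sparse 1 GiB scratch "device"
mke2fs -F -q -t ext2 -b 4096 "$img"
dumpe2fs "$img" 2>/dev/null | grep -i 'backup superblock'
rm -f "$img"
```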
And then trying them one after the other with e2fsck:
Code:
# e2fsck -b 32768 /dev/sdh1
e2fsck 1.42.9 (4-Feb-2014)
The filesystem size (according to the superblock) is 488378104 blocks
The physical size of the device is 488378000 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? y
I tried all the backup superblocks, but none of them work. When I let e2fsck run instead of aborting, the initial checks pass, but towards the end I get the following:
Code:
free blocks count wrong for group #0 (32254, counted=0).
Fix? yes
free blocks count wrong for group #1 (32254, counted=0).
Fix? yes
free blocks count wrong for group #2 (32254, counted=0).
Fix? yes
...
This goes on until approximately group #15000 (sometimes with different block counts), followed by a message that the file system was modified, plus summary information. (Yes, I held down the Y key for about five minutes straight.)
The main problem: the file system doesn't appear to be modified or fixed at all. Mounting it still fails, and the superblock is always reported as damaged, with 488378104 blocks instead of the correct number of 488378000 blocks.
Then I also tried to fix this issue using resize2fs, attempting to resize the file system to the correct number of blocks, or a somewhat lower number, but to no avail -- it almost appears that the HD won't allow changes to be made.
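That may explain the resize2fs failure: resize2fs refuses to do anything until `e2fsck -f` has completed cleanly, and here e2fsck never reaches a clean state, so it's a bit of a catch-22. For comparison, this is what a shrink looks like when the tools cooperate (scratch image again; on the real device the equivalent would be `resize2fs /dev/sdh1 488378000`):

```shell
set -e
# A shrink that works: clean check first, then resize from 1024 down to
# 1000 blocks, then verify the new block count in the superblock.
img=$(mktemp)
truncate -s $((1024 * 4096)) "$img"
mke2fs -F -q -t ext2 -b 4096 "$img"
e2fsck -fp "$img"                    # resize2fs insists on a clean pass
resize2fs "$img" 1000                # new size in file-system blocks
dumpe2fs -h "$img" 2>/dev/null | grep 'Block count'
rm -f "$img"
```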
As a last resort, I got myself a 3TB HDD, formatted it as ext2 as well, and used dd to copy the defective HDD into a workable image file on the new one.
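For completeness, since the exact dd invocation isn't in the post: when imaging a disk that may be failing, `conv=noerror,sync` is the usual safety net, so a bad sector doesn't abort the copy, and the zero-padding keeps later offsets in the image aligned with the disk. The device and image path below are hypothetical:

```shell
# Typical imaging command for a flaky source disk (paths hypothetical):
#   dd if=/dev/sdh1 of=/mnt/new/sdh1.img bs=64K conv=noerror,sync
# Demo of the conv=noerror,sync behaviour on a scratch file: the 5-byte
# input is padded to one full 64 KiB block in the output.
src=$(mktemp); dst=$(mktemp)
printf 'hello' > "$src"
dd if="$src" of="$dst" bs=64K conv=noerror,sync status=none
stat -c %s "$dst"
rm -f "$src" "$dst"
```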
Apart from the fact that mke2fs doesn't appear to accept image files, nothing changed. The superblock is still reported as wrong, and an e2fsck run with any of the backup superblock numbers takes a few hours but doesn't result in a usable file system.
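On the "mke2fs doesn't work with image files" point: the e2fsprogs tools do accept plain files, but mke2fs refuses anything that isn't a block device unless -F is added, and combined with -n it still stays a dry run. A sketch on a scratch file (the real image path would go in its place):

```shell
set -e
# mke2fs needs -F to operate on a regular file; together with -n it still
# only *prints* the layout, including the backup superblock list.
img=$(mktemp)
truncate -s 1G "$img"
mke2fs -n -F -b 4096 "$img" | grep -A1 'Superblock backups'
rm -f "$img"
# e2fsck takes an image directly, e.g. (path hypothetical):
#   e2fsck -B 4096 -b 32768 /mnt/new/sdh1.img
```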
So as the very last resort, I'm wondering if anyone here has any useful suggestions. Have I overlooked anything? Or did I do something to aggravate the problem (if so, please tell me so that I can avoid it in the future)?
Yes, I searched this forum and several others for similar problems (and failing HDs are depressingly common), but either the fixes that worked for the people there didn't work for me, or no solution was found at all.
Any hints are welcome!
Best regards,
Richard Rasker