Thank you for the responses.
I don't think the hard drive is seriously damaged, and I don't have a TB of contiguous free space for a full image.
Yes, I tried mounting it with:
Code:
sudo mount -t ext3 /dev/sdc1 /mnt/wd
and got: "wrong fs type, bad option, bad superblock..."
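That error fits two possibilities: the partition is a Linux software-RAID (md) member rather than a bare filesystem, or it simply isn't ext3 (the mdstat from the working unit shows the big data volume, md2, is XFS). A minimal probe sketch, assuming the drive shows up as /dev/sdc1 and /mnt/wd exists (both names from the post; /dev/md9 is an arbitrary unused md node, adjust to your system):

```shell
# Hedged sketch: device and mount-point names are taken from the post above;
# change them to match your machine.
DEV=${1:-/dev/sdc1}

if [ ! -b "$DEV" ]; then
    # Nothing to probe on machines without the drive attached.
    echo "no such block device: $DEV"
else
    # 1. Does the partition carry an md (Linux software-RAID) superblock?
    sudo mdadm --examine "$DEV"

    # 2. If so, assemble it as a degraded single-disk RAID1 (read-only for
    #    safety), then mount the md device instead of the raw partition:
    sudo mdadm --assemble --run --readonly /dev/md9 "$DEV"
    sudo mount -o ro /dev/md9 /mnt/wd

    # 3. If mdadm finds no superblock, try the correct filesystem type
    #    directly; the data volume on these units is XFS, not ext3:
    # sudo mount -t xfs -o ro "$DEV" /mnt/wd
fi
```

Mounting read-only throughout keeps the drive safe for later recovery attempts if something else is wrong.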
Yes, it doesn't make sense to run RAID 1 on a single drive, but that's what it appears to be. I have SSH access to the same model of device at work, though I can't take that one apart and connect my hard drive to it.
mount
Code:
/dev/root on / type ext3 (rw,noatime,data=ordered)
proc on /proc type proc (rw)
sys on /sys type sysfs (rw)
/dev/pts on /dev/pts type devpts (rw)
securityfs on /sys/kernel/security type securityfs (rw)
/dev/md3 on /var type ext3 (rw,noatime,data=ordered)
/dev/md2 on /DataVolume type xfs (rw,noatime,uqnoenforce)
/dev/ram0 on /mnt/ram type tmpfs (rw)
/dev/md2 on /shares/bak type xfs (rw,noatime,uqnoenforce)
/dev/md2 on /shares/den type xfs (rw,noatime,uqnoenforce)
cat /proc/mdstat
Code:
Personalities : [linear] [raid0] [raid1]
md1 : active raid1 sda2[0]
256896 blocks [2/1] [U_]
md3 : active raid1 sda3[0]
987904 blocks [2/1] [U_]
md2 : active raid1 sda4[0]
973522880 blocks [2/1] [U_]
md0 : active raid1 sda1[0]
1959872 blocks [2/1] [U_]
unused devices: <none>
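In that output, "[2/1] [U_]" means each array expects two members but only one (the internal sda) is active, i.e. every array runs as a degraded RAID1; on a single-drive unit that appears to be by design, so it is not itself a sign of damage. A small sketch showing how to read that format (the sample is inlined here, since the real /proc/mdstat needs the device attached; on your Ubuntu box you would check /proc/mdstat the same way after plugging the drive in):

```shell
# Inline sample in the same format as the post; on a live system you would
# read /proc/mdstat directly.
cat <<'EOF' > /tmp/mdstat.sample
Personalities : [linear] [raid0] [raid1]
md1 : active raid1 sda2[0]
      256896 blocks [2/1] [U_]
md2 : active raid1 sda4[0]
      973522880 blocks [2/1] [U_]
unused devices: <none>
EOF

# "[2/1]" = two members expected, one active; the "_" in "[U_]" is the
# missing mirror. List the arrays that are running degraded:
awk '/^md/ {name=$1} /\[U_\]/ {print name}' /tmp/mdstat.sample
# (prints "md1" and "md2" for this sample)
```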
Is there something in that output that I should check that could help me recover?
[EDIT]
The last two code quotes are from a working hard drive of the same model that I have SSH access to at work, not from the Ubuntu system at home where I connect my problematic hard drive.