LinuxQuestions.org (/questions/)
-   Linux - Newbie (https://www.linuxquestions.org/questions/linux-newbie-8/)
-   -   Suse corrupted raid - used e2fsck but can not mount /dev/md0 (https://www.linuxquestions.org/questions/linux-newbie-8/suse-corrupted-raid-used-e2fsck-but-can-not-mount-dev-md0-543548/)

incredibles 04-05-2007 08:36 AM

SUSE corrupted RAID - used e2fsck but cannot mount /dev/md0
 
I am using a SUSE file server with a three-drive raid0 setup. The power went out and fsck does not let the system boot up. I ran e2fsck and the final message was that the filesystem had been modified and that 25% of the blocks were non-contiguous. Now I can't log in except as root, and I cannot see the RAID at /dev/md0. I am not familiar with Linux. I would appreciate any help.

rtspitz 04-05-2007 02:18 PM

have a look at this:

Code:

cat /proc/mdstat

that file contains info on software RAID devices and will tell you (and us) whether a member of your raid0 (a disk or partition) has failed.

an example output can look like this:

Code:

cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdc2[1] sdb2[0]
      78011520 blocks [2/2] [UU]

md0 : active raid1 sdc1[1] sdb1[0]
      136448 blocks [2/2] [UU]

unused devices: <none>

in your case you need to look for [UUU] or [_UU] and the like; they indicate which device, if any, has failed. in the example above, [_U] on md1 would mean that the member in slot 0, /dev/sdb2, had failed (each position in the brackets corresponds to a device slot, and an underscore replaces the U of a failed member). if you end up with [UUU] it's "just" a corrupted filesystem, and fsck.ext2 should take care of that if recovery is possible at all. if the raid0 itself is degraded, only partial recovery (just chunks of data) might be possible, and that is up to experts to decide.
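a quick way to double-check the array state is something like this (just a sketch, assuming mdadm is installed; substitute your own device names):

Code:

# show detailed array state, including any failed members:
mdadm --detail /dev/md0

# if the array is intact, test the filesystem read-only first
# (-n answers "no" to all prompts and changes nothing):
fsck.ext2 -n /dev/md0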

next time go for raid5. hard disks are cheap, and with raid5 recovery from a single failed disk is possible (raid0 has no redundancy at all).
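for reference, creating a raid5 array with mdadm looks roughly like this (a sketch only, with hypothetical partition names; --create DESTROYS what is on those partitions, so only do this on fresh disks after your data is safe):

Code:

mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
mkfs.ext3 /dev/md0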

incredibles 04-07-2007 11:40 AM

Thanks a lot for your help. I ran the command and this is the output:

Code:

linux:~ # cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sda1[0] sdc1[2] sdb1[1]
      732563712 blocks 32k chunks

unused devices: <none>
linux:~ #

I tried to mount md0 but this is the output:

Code:

linux:~ # mount /dev/md0
mount: mount point raid does not exist

I have a lot of information that I don't want to lose. Is there any way to recover it based on this output? I will go to raid5 once I recover...

Again thanks so much for your help.

rtspitz 04-07-2007 01:06 PM

aha!

so your raid0 seems to be active. when you tried to mount /dev/md0, your system complained that the MOUNT POINT did not exist. when you run mount with just the device name, linux looks the mount point up in /etc/fstab, so this makes me believe the fstab entry is not correct: it apparently points /dev/md0 at a directory ("raid") that does not exist.

what happens if you try:

Code:

mount /dev/md0 /mnt
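and if the fstab entry turns out to be the problem, something like this would show it and fix the missing directory (a sketch; "/raid" stands in for whatever mount point your fstab actually names):

Code:

# see what mount point /etc/fstab assigns to the array:
grep md0 /etc/fstab

# create the missing mount point directory if needed:
mkdir -p /raid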

Quakeboy02 04-07-2007 01:10 PM

Have you tried to see if Knoppix will bring it up? Do you have anything in /dev/md*? Have you tried using mdadm assemble (NOT create!!!!!) to reassemble it? You might try that under Knoppix. This is an mdadm raid, right?
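For the record, a non-destructive way to try that would be something like (a sketch; partition names taken from the mdstat output above):

Code:

# inspect the raid superblocks first, read-only:
mdadm --examine /dev/sda1 /dev/sdb1 /dev/sdc1

# then try to reassemble the array (never --create here!):
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1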

Added: rtspitz noticed that no explicit mount point was given in:
Quote:

linux:~ # mount /dev/md0
mount: mount point raid does not exist
Oops, good eyes, rtspitz. I totally missed the significance of not giving an explicit mount point.

incredibles 04-08-2007 05:44 PM

Success! I ran:

Code:

mount /dev/md0 /mnt

and it mounted correctly; it seems that no information has been lost. I appreciate your help very much, rtspitz.

I figured out how to access /mnt and shared it on the network, so now I have access to all the files over the network. Again, thank you very much for your help and your time!!

Also, thanks to Quakeboy02 for your suggestion.
