SUSE corrupted RAID - used e2fsck but cannot mount /dev/md0
I am using a SUSE file server with a 3-drive RAID0 setup. The power went out, and fsck would not let the system boot. I ran e2fsck; its final message was that the file system had been changed and that 25% of the blocks were non-contiguous. Now I can't log in except as root, and I can't see the RAID at /dev/md0. I am not familiar with Linux and would appreciate any help.
In your case you need to look in /proc/mdstat for [UUU], [_UU], and the like; they indicate which device has failed. For example, [_U] would indicate that member /dev/sdc2 of /dev/md1 had failed. If you end up with [UUU], it's "just" a corrupted filesystem, and fsck.ext2 should take care of that if recovery is possible at all. If the RAID0 itself is degraded, partial recovery (just chunks of data) might be possible, but that is for experts to decide.
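Those markers come straight out of /proc/mdstat. Here's a rough illustration of what a failed member looks like there; the device names, sizes and array below are made up, not your actual output:
Code:
cat /proc/mdstat
# illustrative output - a raid1 with one failed member:
# md1 : active raid1 sdc2[2](F) sdb2[1]
#       1048576 blocks [2/1] [_U]
# (F) marks the failed device; [2/1] means 1 of 2 members is up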
Next time, go for RAID5: hard disks are cheap, and recovery from a failed disk is possible. A sketch of how that would look is below.
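For what it's worth, building a RAID5 is a one-liner with mdadm. This is only a sketch, and the device names are placeholders for whatever disks you actually have:
Code:
# build a 3-disk RAID5 array (placeholder devices - do NOT run this
# against disks that hold data you want to keep)
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
mkfs.ext3 /dev/md0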
Aha! So your RAID0 seems to be active. When you tried to mount /dev/md0, your system complained that the MOUNT POINT did not exist. This makes me believe that an entry in /etc/fstab might not be correct, as Linux tried to mount /dev/md0 to a nonexistent place.
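If that's the case, the fix is to create the mount point and correct the /etc/fstab entry. Something along these lines, assuming you want it under /mnt/raid (the path is just an example):
Code:
mkdir -p /mnt/raid
# then make sure /etc/fstab has a line like:
# /dev/md0   /mnt/raid   ext3   defaults   0   2
mount /dev/md0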
Have you tried to see if Knoppix will bring it up? Do you have anything in /dev/m*? Have you tried using mdadm assemble (NOT create!) to reassemble it? You might try that under Knoppix. This is an mdadm RAID, right?
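Roughly like this; the member partitions below are examples, so substitute your real ones, or let --scan find them:
Code:
# re-assemble an existing array from its members - never --create,
# which would overwrite the existing RAID metadata
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1
# or let mdadm work it out from the on-disk superblocks:
mdadm --assemble --scan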
Added: rtspitz noticed no explicit mount point in:
Quote:
linux:~ # mount /dev/md0
mount: mount point raid does not exist
Oops, good eyes, rtspitz, I totally missed the significance of not giving an explicit mount point.
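In other words, mount /dev/md0 with a single argument just looks up /dev/md0 in /etc/fstab, and that entry pointed at a directory ("raid") that doesn't exist. Giving an explicit, existing mount point sidesteps the bad fstab entry entirely (directory name is an example):
Code:
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid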
It mounted correctly, and it seems that no information has been lost. I appreciate your help very much, rtspitz.
I figured out how to access /mnt, shared it on the network, and now I have access over the network to all the files. Again, thank you very much for your help and your time!
Also, thanks to Quakeboy02 for your suggestion.