Hi, I have a three-disk RAID 5 array on SuSE 10.1:
/dev/sda4 /dev/sdb4 /dev/sdc4 -> /dev/md3
mdadm won't assemble the whole thing because:
"superblock on /dev/sdc4 doesn't match others - assembly aborted"
Assembling just /dev/sda4 and /dev/sdb4 works, except:
"raid array is not clean -- starting background reconstruction"
"cannot start dirty degraded array for md3"
"failed to run raid set md3"
"pers->run() failed ..."
"failed to RUN_ARRAY /dev/md3: Input/output error"
fsck.reiser says I need to rebuild my superblock, and when I try that, it asks what version of ReiserFS I'm using (3.6.x) and for my block size, to which I answer 4096. After that, the utility quits without any more messages. dmesg and /var/log/messages don't contain anything useful either.
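For completeness, the invocation was roughly this (as far as I know reiserfsck and fsck.reiserfs are the same binary on SuSE, and I'm running it against the assembled device; the tool prompts for the format version and block size interactively):

    # rebuild the ReiserFS superblock on the assembled array
    reiserfsck --rebuild-sb /dev/md3
    # answers given at the prompts: format 3.6, block size 4096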
I'm not too eager to save the array itself; I've been thinking about disassembling it for some time anyway. If someone could give me a little insight into how to get at my data one last time, I would be eternally grateful. I'd also like to know if anyone has an idea why fsck would quit without giving me a message.
[UU_] would indicate that the third member of /dev/md3 has failed and needs to be replaced manually before the rebuild can start. What is strange, though, is that your RAID 5 won't come up at all, since it should survive one failed disk... hmmm... take a look at /proc/mdstat!
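Something along these lines should show what the kernel actually thinks of the array, and how to kick off the rebuild once it is running degraded (swap in whichever member really dropped out):

    # which arrays are known and which members are up ([UU_] = one missing)
    cat /proc/mdstat
    # detailed state of the array in question
    mdadm --detail /dev/md3
    # once it runs degraded, re-add the member (or a replacement partition)
    mdadm /dev/md3 --add /dev/sdc4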
Thanks for the info, Spitz, but I started mucking around with it on my own before I saw your reply. One thing to add to what you said: the drive didn't fail (at least not completely, yet). That same drive has three other partitions that are part of three other RAID 5 arrays, all of which were unaffected.
Okay, maybe this will help somebody else. I looked at my last backup and it wasn't that long ago, so I went ahead and tried to fix the problem without really understanding what I was doing. There's no better way to learn than to experiment, right?
Anyway, I recreated the RAID using mdadm, and that seemed to get rid of the mdadm error messages I was getting before. Reiserfsck still complained about the missing superblock, but the --rebuild-sb switch worked this time. I ran into some problems with the journal options and ended up rebuilding the journal as well. After that, reiserfsck reported that I needed to rebuild the tree. The first time I ran it, it worked for about an hour and then stopped (no drive access for 15-20 minutes), so I killed it and tried again; this time it worked (it took close to two hours to complete - maybe it was working the first time and I killed it prematurely).
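From memory, the sequence looked roughly like this; treat it as a sketch rather than a recipe, because re-creating an array over live members only preserves data if the level, chunk size and device order match the original exactly:

    # DANGEROUS: re-create the array in place; this overwrites the md superblocks
    mdadm --create /dev/md3 --level=5 --raid-devices=3 /dev/sda4 /dev/sdb4 /dev/sdc4
    # rebuild the ReiserFS superblock, then the tree
    reiserfsck --rebuild-sb /dev/md3
    reiserfsck --rebuild-tree /dev/md3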
The tree rebuild showed tons of errors, so I figured my data was probably toast. But afterward I could mount the filesystem, and some of my files were intact. A large number had been moved (and generically renamed) into the lost+found directory, and the total size of the recovered data looks to be about half of the original.
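If anyone else ends up in the same spot, it's probably wise to copy everything off before touching the array again; something like this (the mount point and destination here are just placeholders):

    # mount read-only and pull the data onto separate storage
    mkdir -p /mnt/recovery
    mount -o ro /dev/md3 /mnt/recovery
    rsync -a /mnt/recovery/ /backup/raid-rescue/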
Anyway, I'm sure there was a better way to handle this, but I needed my computer back, and like I said, I did have a somewhat recent backup. Let this be a warning to others: RAID 5 is NOT a suitable replacement for regular backups.