Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
Distribution: suse, opensuse, debian, others for testing
Originally Posted by hazmatt20
That German thread describes exactly what mine is doing. If you don't mind translating the solution, I'd be grateful. I tried
Here's a translation of the last part, with the "solution":
at the time I created the raid I must have made a mistake, which showed up right now.
apparently I had created persistent superblocks on the devices (/dev/sd[a-e]) as well as
on the partitions (/dev/sd[a-e]1).
after zeroing the superblocks with "mdadm --zero-superblock /dev/sd[a-e]" and rebooting,
the partitions showed up in /proc/partitions again and the raid was operational and could
be mounted without any errors.
this night was no fun at all, that much I can say. (:
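The fix in the translated post boils down to a couple of mdadm invocations. Here is a hypothetical sketch of that procedure, not the exact commands from the thread: it defaults to a dry run (the mdadm calls are only echoed), and the destructive step is commented out — only zero a superblock after `--examine` has confirmed that the whole disk, not the partition, carries the stray copy.

```shell
#!/bin/sh
# Sketch of the recovery described above: stray md superblocks on the whole
# disks (/dev/sd[a-e]) shadow the real array on the partitions (/dev/sd[a-e]1).
# Dry run by default: MDADM merely echoes the commands.
# Set MDADM=mdadm to actually run them.
MDADM="${MDADM:-echo mdadm}"

# Read-only check: report any md superblock found on each device.
examine_all() {
    for dev in "$@"; do
        $MDADM --examine "$dev"
    done
}

# Destructive step: wipe the stray superblocks. Only do this on the WHOLE
# disks once --examine has confirmed the array really lives on the partitions.
zero_all() {
    for dev in "$@"; do
        $MDADM --zero-superblock "$dev"
    done
}

examine_all /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
examine_all /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# zero_all /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde   # then reboot
```

After zeroing the whole-disk superblocks and rebooting, the partitions should show up in /proc/partitions again and the array assembles from them, as the translated post reports.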
# mount /dev/md0 md0
mount: wrong fs type, bad option, bad superblock on /dev/md0,
missing codepage or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
# fsck /dev/md0
fsck 1.39 (29-May-2006)
e2fsck 1.39 (29-May-2006)
Group descriptors look bad... trying backup blocks...
fsck.ext3: Bad magic number in super-block while trying to open /dev/md0
The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
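One thing still worth trying before calling it game over is e2fsck against a backup superblock. The `-b 8193` in the message assumes 1 KiB filesystem blocks; on a large array with 4 KiB blocks the first backup usually sits at 32768. `mke2fs -n` prints where the backups would be without writing anything (the locations are only right if the defaults match how the filesystem was originally created). Since the array was re-created, this may well still fail, but both probes below are read-only, so it costs nothing. A hedged sketch, dry run by default, assuming the device is /dev/md0 as in the thread:

```shell
#!/bin/sh
# Sketch: probe for ext3 backup superblocks before declaring the fs dead.
# Dry run by default (commands are echoed); set MKE2FS=mke2fs and
# E2FSCK=e2fsck to run for real. DEV is assumed to be the array above.
DEV="${DEV:-/dev/md0}"
MKE2FS="${MKE2FS:-echo mke2fs}"
E2FSCK="${E2FSCK:-echo e2fsck}"

# mke2fs -n only PRINTS what it would do, including the backup superblock
# locations - nothing is written to the device.
probe_backups() {
    $MKE2FS -n "$1"
}

# Read-only fsck attempt against one backup superblock: -n answers "no" to
# every prompt, so the device is not modified. 32768 is the usual first
# backup for 4 KiB blocks (8193 is for 1 KiB blocks).
try_backup() {
    $E2FSCK -n -b "$2" "$1"
}

probe_backups "$DEV"
try_backup "$DEV" 32768
```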
Is there anything else we can try, or is it game over?
You've kind of lost me here. I was under the impression that the create option created a new array and threw away anything that previously existed. As far as I understood, the data was gone the moment you ran create.
Well, testdisk didn't show any partitions under advanced, so I'm running analyse. It's going to take a good while, but I'm going to start making plans to start reloading data. I'll post an update when it finishes.
Alright, well, analyse didn't detect things correctly either and just gave back a bunch of garbage, so I'm pretty positive it's gone. So many DVDs to reload! Oh, well. Thanks for your help.
One last thing: what precautions should I take in the future to increase my chances of recovery? I know now to run dist-upgrade instead of installing from disk, but other than that and backing up my mdadm.conf, what should I do?
I've been thinking about this today, and I wonder if the problem could have been avoided if you hadn't had your drives connected when you installed mdadm. I mentioned that I installed mdadm once and it created a bunch of junk on the drives I had connected. It may or may not be a problem, but it's something to think about if you have to reinstall for any reason. You might also think about creating a backup of your non-data files. There are a number of good backup systems out there. I just used tar along with a trivial script I wrote. I actually did a restore (from Knoppix) of the boot/non-data image I keep on my data disks recently, and it worked just fine.
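On the backup point: beyond copying mdadm.conf, it helps to snapshot the RAID metadata while the array is healthy — the ARRAY lines mdadm can generate, plus the per-member `--examine` output (device order, chunk size, metadata version), which is what you'd need to reassemble or carefully re-create an array later. A hypothetical sketch; the output path and partition list are assumptions, and it defaults to a dry run (mdadm calls are echoed):

```shell
#!/bin/sh
# Sketch: back up the metadata needed to reassemble or re-create the array.
# Dry run by default (mdadm calls are echoed); set MDADM=mdadm for real use.
# OUT and the partition list are assumptions - adjust to your layout.
MDADM="${MDADM:-echo mdadm}"
OUT="${OUT:-$(mktemp -d)}"

backup_raid_metadata() {
    # ARRAY definitions in mdadm.conf format (name, level, UUID):
    $MDADM --detail --scan > "$OUT/mdadm-scan.conf"
    # Per-member superblocks: device order, chunk size, metadata version.
    : > "$OUT/examine.txt"
    for part in /dev/sda1 /dev/sdb1 /dev/sdc1; do
        $MDADM --examine "$part" >> "$OUT/examine.txt"
    done
    # Keep a copy of the live config file, wherever the distro puts it.
    for conf in /etc/mdadm.conf /etc/mdadm/mdadm.conf; do
        [ -f "$conf" ] && cp "$conf" "$OUT/"
    done
    echo "RAID metadata saved under $OUT"
}

backup_raid_metadata
```

Store the result somewhere off the array itself; a copy of mdadm.conf alone doesn't capture the member order and offsets that `--examine` records.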
Well, I've almost got everything working, but I've got a few snags. Two parts.
First, I want the six 400GB drives to start as md0 and the three 500GB drives to start as md1. When I reboot, md0 starts with 2 of the 3 500GB drives and resyncs with the third, while md1 starts with 4 of the 6 400GB drives. mdadm.conf is currently