Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
Distribution: RedHat 4-9, Fedora, Ubuntu, CentOS 3-7, Puppy Linux, and lots of Raspberry Pi
Posts: 142
Rep:
RAID 1 issue - arrays broken after upgrade
Hi
I've tried to look this up without success. I had a working server with two RAID 1 disks, each with three active partitions (not counting swap).
After an upgrade, the server (CentOS 6.3) fails to boot with a kernel panic. Using a rescue disk I can see all the partitions, but only md1 actually comes up; md0 and md2 do not, although each of the individual partitions exists uncorrupted.
I've tried using mdadm to reassemble them, but I must be doing something wrong because I cannot make it work.
Put simply: how can I get sda1 and sdb1 back as md0, and sda3 and sdb3 back as md2?
I should add that I have checked mdadm.conf and it looks fine (i.e. it specifies the devices and partitions correctly, using their UUIDs).
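For reference, reassembling existing arrays from a rescue shell usually looks like the sketch below (device names taken from this post; the commands are wrapped in a function so nothing runs until you invoke it):

```shell
# Minimal sketch, assuming a rescue shell with mdadm available and
# the partition layout described above (sda1/sdb1 -> md0,
# sda3/sdb3 -> md2). Invoke assemble_arrays from the rescue
# environment to actually run the commands.
assemble_arrays() {
    # --assemble rebuilds an existing array from its members;
    # it reads the RAID superblocks rather than rewriting them.
    mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
    mdadm --assemble /dev/md2 /dev/sda3 /dev/sdb3
    # Confirm both arrays are up and in sync:
    cat /proc/mdstat
}
```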
Scratching my head has resulted in more hair loss!
Original Poster
UPDATE:
Using the CentOS recovery disk, it fails to pick up md0 or md2. If I do:
mdadm --assemble /dev/md0 /dev/sd[ab]1
and repeat for md2, then cat /proc/mdstat shows all three arrays recognised and in sync.
My next thought was to scan and write the result to mdadm.conf, so I did:
mdadm --detail --scan >> /mnt/sysimage/etc/mdadm.conf (having mounted root at /mnt/sysimage)
Everything looked good, but after a reboot it doesn't even reach the splash screen, which it did before!
So I am now presuming I've either a) messed up mdadm.conf, b) messed up grub, or c) both!
mdadm.conf contains only the references to the three RAID devices - nothing else, just those three lines.
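For comparison, a minimal mdadm.conf for this layout would hold just three ARRAY lines of roughly this shape (the UUIDs below are placeholders; the real values come from `mdadm --detail --scan`):

```
# /etc/mdadm.conf - one ARRAY line per device.
# UUIDs are placeholders; use the values reported by
# `mdadm --detail --scan` on the assembled arrays.
ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
ARRAY /dev/md1 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
ARRAY /dev/md2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```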
I tried to chroot to /mnt/sysimage, but it won't because /home is encrypted. So I manually mounted / and /boot, chrooted, and tried grub-install. It won't work either. I tried /dev/md, /dev/sda and /dev/sdb individually - all fail.
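A common reason grub-install fails inside a bare chroot is that /dev, /proc and /sys are not populated. A sketch of the usual workaround (paths assumed from this post; again wrapped in a function so nothing runs until invoked):

```shell
# Sketch, assuming root is mounted at /mnt/sysimage (as in the post)
# and /boot is mounted inside it. Bind-mount the virtual filesystems
# first so grub-install can see the devices, then invoke
# reinstall_grub from the rescue shell.
reinstall_grub() {
    mount --bind /dev  /mnt/sysimage/dev
    mount --bind /proc /mnt/sysimage/proc
    mount --bind /sys  /mnt/sysimage/sys
    # Install GRUB to both disks so either half of the mirror can boot:
    chroot /mnt/sysimage grub-install /dev/sda
    chroot /mnt/sysimage grub-install /dev/sdb
}
```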
Don't know what to do next. Seem to make some progress and then get stuck again. What I do know is the disks and data are intact, which has to be good news.
Original Poster
This is probably about as far as I can go on my own. I do not want to lose everything I've done (i.e. building a mail/caching web server), but I can't seem to get this fixed.
In a last ditch attempt before I rebuild from scratch, does anyone have any ideas how I can:
a) using the CentOS recovery disk, make the RAID arrays permanent so they are seen on reboot, and
b) repair GRUB?
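One avenue worth checking (an assumption, not something confirmed in this thread): if the upgrade produced an initramfs that no longer assembles the arrays at boot, that alone can cause a kernel panic even with intact disks. On CentOS 6 the image can be regenerated from the chroot with dracut; a sketch, where the kernel version is a placeholder:

```shell
# Sketch, assuming root is mounted at /mnt/sysimage and the
# mdadm.conf inside it is correct. KVER is a placeholder: use the
# installed kernel's version (see ls /mnt/sysimage/boot/vmlinuz-*),
# not the rescue disk's.
rebuild_initramfs() {
    KVER="$1"
    chroot /mnt/sysimage dracut --force "/boot/initramfs-${KVER}.img" "${KVER}"
}
```

Called as, e.g., `rebuild_initramfs <kernel-version>` from the rescue shell before rebooting.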