Removed RAID5 array and now no boot
I'm having a great time with this little problem!
My home server is running Ubuntu 6.10 Server, and it had the following disk configuration:
/dev/md0 = 3x500GB RAID 5, /storage
/dev/md1 = 2x20GB RAID 1, /
/dev/md2 = 2x20GB RAID 1, swap
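For reference, /proc/mdstat on that box looked roughly like this (member partitions and block counts below are illustrative, not my exact output):

    $ cat /proc/mdstat
    Personalities : [raid1] [raid5]
    md2 : active raid1 sdd2[0] sde2[1]
          2000000 blocks [2/2] [UU]
    md1 : active raid1 sdd1[0] sde1[1]
          18000000 blocks [2/2] [UU]
    md0 : active raid5 sda1[0] sdb1[1] sdc1[2]
          976000000 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]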
I'm migrating to a VMware ESX server, and now that I have the file-share side of it set up, I've copied all my data off the /storage share and removed the 3x500GB drives.
However, now my system can't boot, because mdadm renumbers the two RAID 1 arrays on the 20GB drives as /dev/md0 and /dev/md1, respectively.
My system uses lilo as its boot loader. I've even gone so far as to plug the 3x500GB drives back in, but all I seem to get is an initramfs prompt. I did get it to boot normally once, but then the 3x500GB array was /dev/md0 again, and I couldn't run /sbin/lilo to set md0 as the future boot device (which will be my RAID 1 array once I power down and remove the 500GB drives), because md0 was currently the RAID 5 array and lilo wouldn't apply the changes.
I know all I need to do is change lilo.conf to point at the correct md0 to boot from, but I have no way of running /sbin/lilo to apply the change. Any suggestions?
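What I need lilo.conf to end up as is roughly this, once the RAID 1 array is md0 (kernel path and label here are just the usual shape, not my exact file):

    boot=/dev/md0          # where lilo installs the boot loader
    root=/dev/md0          # root filesystem device
    image=/vmlinuz         # kernel to boot
        label=Linux
        read-only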
When you remove an array, it is tidier to dismantle it with mdadm before unplugging the disks.
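Roughly like this, before pulling the drives (the array and member partition names here are examples; substitute your own):

    mdadm --stop /dev/md0                # stop the RAID 5 array
    mdadm --zero-superblock /dev/sda1    # wipe md metadata from each member
    mdadm --zero-superblock /dev/sdb1
    mdadm --zero-superblock /dev/sdc1
    # then drop that array's ARRAY line from /etc/mdadm/mdadm.conf and
    # rebuild the initramfs so boot-time assembly matches:
    update-initramfs -u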
But, surely, all you need is the boot device specified as /dev/md0 in lilo.conf and an adjusted fstab?
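That is, something like this in /etc/fstab once the RAID 1 arrays have become md0 and md1 (filesystem type and options are examples):

    /dev/md0    /       ext3    defaults,errors=remount-ro   0   1
    /dev/md1    none    swap    sw                           0   0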
Yeah, that's what it needed. Once I got a temporary CD-ROM drive installed in the server and booted into a rescue disk, it was a simple matter of editing lilo.conf to use /dev/md0, editing fstab, and then chrooting in to run /sbin/lilo.
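For anyone who finds this later, the rescue-disk sequence was roughly this (it assumes the RAID 1 root has come up as /dev/md0):

    mdadm --assemble --scan        # assemble the remaining arrays
    mount /dev/md0 /mnt            # mount the RAID 1 root filesystem
    mount --bind /dev /mnt/dev     # lilo needs the real device nodes
    mount --bind /proc /mnt/proc
    chroot /mnt
    # edit /etc/lilo.conf (boot= and root= -> /dev/md0) and /etc/fstab,
    # then write the new boot map:
    /sbin/lilo
    exit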