Server fails to start after softraid extension
My server fails to boot after I extended my Linux software RAID from 3 disks to 4.
It doubles as internet access gateway, router, fileserver, printserver, firewall (i.e. EVERYTHING ;-))
The array extension (RAID5) went surprisingly well (ask if you want to know more; online RAID5 reshaping is a pretty new feature that was not possible a year ago).
I updated mdadm.conf after the extension, and the array now looks like this:
/dev/hda1 /dev/hdc1 /dev/hde1 /dev/hdg1
(hda1 is the new disk)
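In case it helps others hitting the same thing: after a reshape, mdadm can emit the correct ARRAY lines itself, and on Debian the initrd carries its own copy of mdadm.conf, so it has to be rebuilt too. A rough sketch of what I believe the update should look like (paths are the Debian defaults, check against your own system):

```shell
# Print the current array layout in mdadm.conf format:
mdadm --detail --scan
# Replace the stale ARRAY lines in /etc/mdadm/mdadm.conf with that output
# (num-devices and UUID change after adding a disk), then rebuild the
# initramfs so the copy *inside* the initrd picks up the new config:
update-initramfs -u
```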
After rebooting the server, the headache started: failure to recognize and mount /dev/md0.
The initrd tells me that no array is found for /dev/md0, although it does recognize and start /dev/md1, which is my swap space (and which I did not modify).
I am stuck in busybox, in the Debian Etch initramfs.
I can activate the RAID 5 array by hand with
mdadm --assemble /dev/md0 /dev/hda1 /dev/hdc1 /dev/hde1 /dev/hdg1
and even mount it to a directory.
(so the problem is probably small and not in the array itself)
But I could not figure out how to get pivot_root to work to switch over to the real root, so I can at least get my system running again and restore functionality.
The man page eludes me (much too short), and when I try it, it always says "device or resource busy" ??? :-(
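From what I have read, the EBUSY is expected: pivot_root cannot be used on an initramfs, because the initial rootfs can never be unmounted. The handover tool is switch_root (busybox) or run-init (klibc, which is what Debian's initramfs-tools normally uses). A rough sketch of the manual recovery, device names taken from my setup above, the /root mountpoint assumed to exist in the initramfs:

```shell
# Assemble the array by hand, mount the real root, then replace the
# initramfs /init with the real init (switch_root must be PID 1, hence exec):
mdadm --assemble /dev/md0 /dev/hda1 /dev/hdc1 /dev/hde1 /dev/hdg1
mount /dev/md0 /root
exec switch_root /root /sbin/init
```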
Now the most burning questions at this point:
1. How the heck can the initrd mount the RAID array when it takes its configuration from the mdadm.conf that IS on the array?
2. How does the initrd start? Is there a script that runs at the very beginning when the initrd is active? If I could run that again after assembling the array by hand, I might be able to get the system working again and understand what is going wrong.
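Partially answering my own question 1 while digging: the initrd does not read the mdadm.conf on the array at all; it carries a snapshot that was copied in when the initrd was built, which is presumably why my reshape is invisible to it. The initrd is a gzipped cpio archive and can be unpacked and inspected from the running (or rescue) system; a sketch, with the kernel version left for you to fill in:

```shell
# Unpack a copy of the initrd to look at what actually runs at boot:
mkdir /tmp/ird && cd /tmp/ird
zcat /boot/initrd.img-<version> | cpio -id
less init                    # the top-level script the kernel executes first
cat etc/mdadm/mdadm.conf     # the snapshot the initrd really uses
```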
I am glad for any pointers or insights, as always. I never dealt much with the initrd before, and now more than ever I don't see the need for this complication.
thanks in advance