Here is a post found on another board which did the trick for me; the RAID still starts cleanly after a few reboot tests. Thanks to Milli:
"I found the same problem after an upgrade to linux-image-2.6.28-11-server (Jaunty, 9.04) when the /etc/mdadm/mdadm.conf file inside the initrd image had information that did not match the UUIDs of the real arrays, thus auto-start failed leaving me with /dev/md_d0, /dev/md_d1 and /dev/md_d2 instead of /dev/md0, /dev/md1, and dev/md2 as expected.
I then ran "mdadm --stop /dev/md_d0", then on md_d1, etc, to clear the bad assemble attempt (check /proc/mdstat to see), then ran "mdadm --auto-detect", mainly to just see what the issue was with auto-starting of the arrays, however it created them again but properly this time. I then let it finish the boot process at that point. All seemed fine. After the system was up, I then force-recreated the mdadm.conf file so the UUIDs matched... "/usr/share/mdadm/mkconf force-generate /etc/mdadm/mdadm.conf" (copy your mdadm.conf to /var/tmp or something first, if you want to diff it later). Then ran "update-initramfs -u" to re-build the initrd images. Then I rebooted.
Reboot went fine. All arrays were recognized and auto-started properly, with no leftover /dev/md_d0 and friends, so I have to assume that when the arrays auto-start properly, at some point they are renamed to match what's in /etc/mdadm/mdadm.conf."
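
For reference, here is the same sequence condensed into one run-through, assuming (as in that post) the stray devices are /dev/md_d0 through /dev/md_d2 and there are three arrays; substitute whatever /proc/mdstat actually shows on your system, and run everything as root:

    cat /proc/mdstat                        # see which md_dX devices the bad assemble created
    mdadm --stop /dev/md_d0                 # stop each stray device; repeat for md_d1, md_d2
    mdadm --stop /dev/md_d1
    mdadm --stop /dev/md_d2
    mdadm --auto-detect                     # let the kernel re-assemble the arrays properly
    cp /etc/mdadm/mdadm.conf /var/tmp/      # keep a copy if you want to diff it later
    /usr/share/mdadm/mkconf force-generate /etc/mdadm/mdadm.conf   # regenerate with the real UUIDs
    update-initramfs -u                     # rebuild the initrd so it carries the corrected mdadm.conf
    reboot

The key part is the last two commands: the initrd has its own copy of mdadm.conf, so fixing the file on disk does nothing until the initrd is rebuilt.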