I'm running Debian Wheezy 64-bit. It was running perfectly with a 2-disk RAID1 array. Booting up, rebooting, shutting down, booting up. No problem.
I decided I wanted to add some disks to create a second RAID array. Oh my.
For some reason, my setup REALLY seems to dislike having its partitions changed. I think I know what the issue centres on, and I'm after some advice on the two possible fixes ...
When the boot fails, there's a message about a start job running ... it seems to be waiting for a disk device to become available, and after 90 seconds it drops to emergency mode.
Running journalctl -xb from emergency mode, I saw entries stating that the job for dev-md0 had failed. That in turn meant the RAID array md0 was not available, followed by a whole list of dependency failures.
The same then happened for the RAID1 array.
At this point the RAID0 array had been created with a missing drive, whereas the RAID1 array was fully assembled and clean.
HOWEVER, blkid shows the disks as being available.
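For reference, this is roughly what I poked at from the emergency shell (output omitted, and dev-md0.device is just my guess at the unit name systemd uses for the array):

    journalctl -xb | grep -i md        # boot-log entries mentioning the md devices
    systemctl list-jobs                # jobs still waiting or failed
    systemctl status dev-md0.device    # state of the device unit that timed out
    cat /proc/mdstat                   # what the kernel itself thinks of the arrays
    blkid                              # the member disks/partitions all show up here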
I tried running update-initramfs, but the problem was still there.
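In case it matters, the rebuild itself was just update-initramfs -u; something like the following (lsinitramfs comes with initramfs-tools) would show whether mdadm and its config actually make it into the image:

    update-initramfs -u                                     # rebuild the initramfs for the current kernel
    lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm   # list mdadm-related files packed into it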
I tried a kernel with CONFIG_FHANDLE=y. Still no joy.
I tried renaming mdadm.conf to mdadm.old. Still no joy.
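(Side note, in case it's relevant: I understand the ARRAY lines can be regenerated from the running arrays roughly like this; mdadm --detail --scan is standard, and /usr/share/mdadm/mkconf is, I believe, the Debian helper for writing a fresh config:)

    mdadm --detail --scan                             # print ARRAY lines for the currently assembled arrays
    /usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf   # Debian helper: regenerate mdadm.conf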
Finally, I commented out the /dev/md0 and /dev/md1 entries in /etc/fstab and bingo !!! Perfect boot.
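The commented-out lines look something like this (the mount points and filesystem type here are placeholders, not my exact entries):

    # /dev/md0   /mnt/raid0   ext4   defaults   0   2
    # /dev/md1   /mnt/raid1   ext4   defaults   0   2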
I then ran mdadm --assemble --scan, which immediately found and assembled the RAID arrays.
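That is, something along these lines, with /proc/mdstat and mdadm --detail used only to confirm the state of the arrays afterwards:

    mdadm --assemble --scan     # assemble every array mdadm can find
    cat /proc/mdstat            # quick overview of array status
    mdadm --detail /dev/md0     # detailed state of each array
    mdadm --detail /dev/md1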
I then used Webmin to (re)create the mounts that were in /etc/fstab.
So my questions stem from this thought:
Clearly there is something in my RAID configuration which refuses to accept, at boot time, that the disks making up the arrays are present. I believe this has NOTHING to do with one array having been assembled with a missing disk: the boot log also showed errors for my RAID1 array, which had both disks present and clean.
1) What is it in the config, and how can I change it so this doesn't happen in the first place?
2) If I can't stop it, how can I tell the system to get past the error and carry on booting?
cheers guys