I used the command generated by mkinitrd_command_generator.sh, except that I changed the root device parameter to -r LABEL=root. I then added the dmraid executable to /boot/mkinitrd-tree/sbin, copied all dmraid-related libraries to /boot/mkinitrd-tree/lib64, and added the following commands to the init script, right below the part that does the LVM initialization:
if [ -x /sbin/dmraid ]; then
  # Activate all fakeRAID sets, then wait for udev to create the
  # /dev/dm-* nodes before the root FS is probed.
  /sbin/dmraid -ay
  /sbin/udevadm settle --timeout=10
fi
I then ran mkinitrd again with no parameters. This generated a working /boot/initrd.gz with dmraid support.
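The staging steps above (binary into sbin, its libraries into lib64) can be sketched with ldd. This is only an illustration: it uses /bin/sh as a stand-in for /sbin/dmraid and a temporary directory as a stand-in for /boot/mkinitrd-tree, since the real paths only exist on the target system.

```shell
#!/bin/sh
# Sketch: stage a binary and its shared-library dependencies into an
# initrd tree. BIN and TREE are stand-ins; on the real system they would
# be /sbin/dmraid and /boot/mkinitrd-tree.
set -e
BIN=/bin/sh            # stand-in for /sbin/dmraid
TREE=$(mktemp -d)      # stand-in for /boot/mkinitrd-tree

mkdir -p "$TREE/sbin" "$TREE/lib64"
cp "$BIN" "$TREE/sbin/"

# Copy every shared library the binary links against
# (ldd lines containing a path; the vdso line has none and is skipped).
for lib in $(ldd "$BIN" | awk '/\//{print $(NF-1)}'); do
    cp "$lib" "$TREE/lib64/"
done

ls "$TREE/lib64"
```

On the real system, running ldd against /sbin/dmraid directly is a quick way to confirm that no dmraid-related library was missed.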
While troubleshooting this issue, I added a few debug statements to the init script that would print the output of findfs LABEL=root at various points in the script. I was able to determine the following:
- When init starts, findfs returns /dev/sda1 and /dev/sda2 for LABEL=root and LABEL=boot respectively. This is as expected, as each component of a fakeRAID mirror set contains a regular partition with a valid file system and RAID metadata at the end.
- Once the dmraid command has been run, findfs returns /dev/dm-1 and /dev/dm-2, as it should.
- The init script successfully identifies /dev/dm-1 as the root FS and is able to mount it (adding a mount command at the very end of the init script confirmed this).
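The debug statements mentioned above were along these lines (a sketch of an init-script fragment, not the exact lines; findfs is part of util-linux and is already present in the mkinitrd tree):

```shell
# Hypothetical debug lines, inserted before and after the dmraid block
# to see which device node each label resolves to at that point.
echo "root: $(findfs LABEL=root)  boot: $(findfs LABEL=boot)"
```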
And now for the weird part: when initrd exits and the boot process continues, findfs immediately starts returning /dev/sda1 again. There's no way the kernel could have probed /dev/sda1 again, as it is mounted and locked by device-mapper, so this must be stale information from earlier in the boot process. At this point, mount also insists that /dev/sda1 is mounted as /. And no, there's no stale information in /etc/mtab, so where is this information coming from?
Interestingly, running partprobe will fix the invalid /dev/sda2 reference, since /dev/dm-2 (a small, mirrored boot volume) never gets mounted. The /dev/sda1 reference stays, though, as /dev/dm-1 is mounted and cannot be probed.