At the time I post this, your post has been viewed 88 times but there have been no replies.
I believe this is because your post is quite confusing: it seems you haven't read the RAID How-Tos, and it also seems you've left out some information.
For instance:
Quote:
Originally Posted by infernalcucumber
First, I have cleaned the sdb:
Code:
dd if=/dev/zero of=/dev/sdb bs=8M count=1000
Now, why would you do that? Surely, a new virtual disk is by definition empty?
Continuing on:
Quote:
Originally Posted by infernalcucumber
and copied the sda's partitions to sdb:
Code:
sfdisk -d /dev/sda | sfdisk /dev/sdb
You mean you copied the
partition table. It seems this procedure would also transfer the label ID of the source disk onto the destination disk, which is a bad idea if you want to have both disks connected to the same system.
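If you do want both disks in the same machine, one workaround (assuming a util-linux sfdisk recent enough to emit a "label-id:" line in its dump; check yours first) is to strip that line so sfdisk generates a fresh identifier for the destination. A sketch, demonstrated on a canned dump so no disk is touched:

```shell
# The destructive one-liner (do NOT run blindly) would be roughly:
#   sfdisk -d /dev/sda | grep -v '^label-id' | sfdisk /dev/sdb
# Dropping the "label-id:" line should make sfdisk generate a new disk ID.
# Demonstration on a canned dump instead of a real disk:
dump='label: dos
label-id: 0x1234abcd
device: /dev/sda
unit: sectors

/dev/sda1 : start=2048, size=1024000, type=fd'
printf '%s\n' "$dump" | grep -v '^label-id'
```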
Quote:
Originally Posted by infernalcucumber
Next, I've changed the type of partitions to "Linux raid autodetect":
Quote:
Originally Posted by infernalcucumber
For each partition (md1,md2,md5,md6,md7), I did next:
Code:
mdadm --create /dev/md1 --level=1 --metadata=0.90 --raid-disk=2 missing /dev/sdb1
The fact that you're creating degraded mirror sets indicates that you want to copy existing data from the various partitions on
/dev/sda and then add these partitions to the RAID sets afterwards, but why are you using metadata v0.90 for all the
md devices?
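For arrays the boot loader never has to read directly, v1.2 metadata (the current mdadm default) is generally the better choice; 0.90 is mainly useful when an old boot loader must see the bare filesystem at the start of the member partition. A hypothetical alternative for one of the data arrays (the command is only echoed here, not executed, since it would touch real disks):

```shell
# Hypothetical: v1.2 metadata for a data array. Only an array that the
# boot loader itself must read (if any) needs the old 0.90 format.
cmd='mdadm --create /dev/md5 --level=1 --metadata=1.2 --raid-devices=2 missing /dev/sdb5'
echo "$cmd"
```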
Quote:
Originally Posted by infernalcucumber
saved the raid configuration:
Code:
mdadm --examine --scan >> /etc/mdadm.conf
Next, /etc/fstab and /etc/mtab were edited by replacing sdaX->mdX
I'd just like to point out that this will only work if
/etc/mdadm.conf is present in the initrd image.
The initrd boot script will indeed activate the RAID sets, but unless
/etc/mdadm.conf is present, the RAID devices will be dynamically named using a particular numbering scheme, starting from
/dev/md127 and counting downwards. This will obviously not match
/etc/fstab on your root partition, so when the time comes for the startup script to remount root as read/write, it won't find root at all and you'll be unceremoniously dumped at a recovery console.
Consider using UUIDs or labels instead.
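For example (the UUID below is made up; on a real system, blkid prints the actual value):

```shell
# Sketch: reference the root filesystem by UUID instead of a device name.
# On the real system:   blkid -s UUID -o value /dev/md2
# Here we rewrite a sample fstab line using a made-up UUID:
uuid='0f0f0f0f-1111-2222-3333-444455556666'   # hypothetical; use blkid's output
line='/dev/md2   /   reiserfs   defaults   1 1'
printf '%s\n' "$line" | sed "s|^/dev/md2|UUID=$uuid|"
```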
Also, you left out the part where you (presumably) transferred files from
/dev/sdaX to the various
md devices.
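Presumably something along these lines (the paths are illustrative, not taken from your post); the demo below runs on temporary directories so it is safe to execute:

```shell
# Sketch of the omitted data-copy step. On the real system it would be
# roughly:   mount /dev/md5 /mnt/md5 && cp -a /home/. /mnt/md5/
# Demonstrated on temp dirs so nothing real is touched:
src=$(mktemp -d)
dst=$(mktemp -d)
echo 'some data' > "$src/file"
cp -a "$src/." "$dst/"
cat "$dst/file"
```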
Quote:
Originally Posted by infernalcucumber
and the initial ramdisk were prepared:
Code:
mkinitrd -c -k 4.4.14-smp -f reiserfs -r /dev/md2 -m reiserfs:dm-raid -u -o /boot/initrd_raid.gz
There are two issues with that command:
1. From
man mkinitrd:
Code:
-R This option adds RAID support to the initrd, if a static mdadm binary is available on the system.
2.
dm-raid.ko has nothing to do with
mdadm support: "dm" stands for "device mapper", the kernel framework that LVM is built on. The module you're looking for is called
raid1.
I don't know how grub handles
mdadm RAID sets, but it seems you've added more modules to grub.cfg than needed.
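Putting those two fixes together, I'd expect the working command to look roughly like this (untested on my end; do verify that raid1 is the right module name for your kernel):

```shell
# Hypothetical corrected invocation: -R adds mdadm support to the initrd
# (per man mkinitrd), and raid1, not dm-raid, is the md mirror module.
# Echoed only, since running mkinitrd here would make no sense:
cmd='mkinitrd -c -k 4.4.14-smp -f reiserfs -r /dev/md2 -m reiserfs:raid1 -R -u -o /boot/initrd_raid.gz'
echo "$cmd"
```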