Weird behaviour of mdadm/software RAID5
I have a setup like the following:
- 3x SATA Seagate 400 GB (identical drives) on an ASUS M2NPV-VM
- /dev/sd[abc] are members of the RAID5 array /dev/md0
The status of /dev/md0 is clean, with a persistent superblock, etc.
Now I noticed that /dev/sda has no partition on it, while /dev/sdb and /dev/sdc each have an 'fd' (Linux raid autodetect) partition. This is already weird.
So I go ahead and remove sda from the array, zero its superblock, and for good measure do a
sudo dd if=/dev/zero of=/dev/sda count=1M
Now I partition sda properly, with sda1 set to type 'fd' (Linux raid autodetect). When I then run
sudo mdadm /dev/md0 --add /dev/sda1
I only get
mdadm: add new device failed for /dev/sda1 as 3: Invalid argument
The funny thing is, when I --add /dev/sda instead, it works and starts rebuilding the array. Why do I have to add the whole disk, rather than the partition? This doesn't make sense to me.
Any hints or directions?
Make sure the DEVICE statement in your /etc/mdadm.conf (or /etc/mdadm/mdadm.conf) covers the partition you are adding.
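For example, a DEVICE line that covers the new partition might look like this (device names taken from this thread; this is just a sketch, adjust to your setup):

```
# /etc/mdadm.conf (or /etc/mdadm/mdadm.conf)
DEVICE /dev/sda1 /dev/sdb1 /dev/sdc1
# or, to accept any block device listed in /proc/partitions:
DEVICE partitions
```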
I never use /etc/mdadm.conf
I set up a replacement disk with the following:
dd if=/dev/zero of=/dev/sda           # wipe the old partition table and superblock
sfdisk -d /dev/sdb | tee sdb.sfdisk   # dump sdb's partition table to a file
sfdisk /dev/sda < sdb.sfdisk          # replicate that partition table onto sda
mdadm /dev/md0 --add /dev/sda1        # add the new partition to the array
That is, assuming you are using partitions, rather than the whole drive, as array members...
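For reference, the sdb.sfdisk dump saved above would look roughly like the following (the start/size numbers here are made up for illustration; the important part is the Id=fd type on the RAID partition):

```
# Hypothetical `sfdisk -d /dev/sdb` output (DOS label, units: sectors)
/dev/sdb1 : start=       63, size=781417602, Id=fd
/dev/sdb2 : start=        0, size=        0, Id= 0
/dev/sdb3 : start=        0, size=        0, Id= 0
/dev/sdb4 : start=        0, size=        0, Id= 0
```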
Compare the output of 'mdadm -E /dev/sdb' with that of 'mdadm -E /dev/sdb1' and 'mdadm -E /dev/sda1'. The location of the superblock will tell you whether the array members were created on whole disks or on partitions.