Quote:
Originally Posted by Fred Caro
...add the reintroduced drive using mdadm and grub needs to be installed on (assuming only 2) both /dev/sda's.
During the CentOS 6.6 install, I set up the partitions using "custom" and made the 3 partitions that way (create RAID partitions on each disk, then a RAID device on top; repeat). What's the issue with my partitioning scheme?
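For reference, what the installer produced is roughly the manual equivalent of the commands below. This is just a sketch: it assumes sda1/sdb1 hold /boot, sda2/sdb2 swap, and sda3/sdb3 the root filesystem, with metadata versions matching the mdstat output further down.
Code:
# partition both disks identically with "fd" (Linux raid autodetect) partitions first,
# then pair them up. 1.0 metadata on /boot puts the superblock at the end of the
# partition so a legacy boot loader can read the filesystem normally.
mdadm --create /dev/md0 --metadata=1.0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # /boot
mdadm --create /dev/md1 --metadata=1.1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # swap
mdadm --create /dev/md2 --metadata=1.1 --level=1 --raid-devices=2 --bitmap=internal /dev/sda3 /dev/sdb3   # /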
When asked where to put the boot loader, I chose /dev/md0 over either /dev/sda or /dev/sdb because I wanted resiliency if one drive died. My tests booting with one drive disconnected (I tried both) worked flawlessly, except for the issue in the OP (if it even is one).
Code:
$ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[2] sdb1[3]
      205760 blocks super 1.0 [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[2]
      8389632 blocks super 1.1 [2/2] [UU]

md2 : active raid1 sdb3[1] sda3[2]
      102401024 blocks super 1.1 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
$ mdadm --version
mdadm - v3.3 - 3rd September 2013
$
More background:
My initial attempt used IRST in the BIOS. That worked well, and so did the "disconnect-a-drive" test. Even better than plain md: after adding the "disconnected" disk back to the array in the BIOS, once Linux booted, the md arrays were all automagically rebuilt, without *any* intervention on my part.
Why then the switch to md? After the rebuild of the array, the system wouldn't boot, so I dropped IRST. (To be fair, the BIOS was set to use *only* UEFI, and my "incomplete understanding" of that could've affected the bad outcome; I also used CentOS 7 that time. I switched the BIOS back to "traditional" legacy "Auto" CSM + UEFI mode and installed from the 6.6 boot CD using its "Legacy" boot loader rather than the UEFI one.) Besides, the consensus is that IRST sucks vs. 100% Linux md.
Back to the topic at hand:
An ancillary issue appeared. Altering the "disconnect" test, I inserted a brand-new (identical) sda drive instead of just reconnecting the original.
- sfdisk -d /dev/sdb | sfdisk /dev/sda
- mdadm --add (each new partition back into its array)
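Spelled out, the recovery was roughly the following (a sketch; the per-partition --add targets are my reconstruction from the layout above):
Code:
# copy the partition table from the surviving disk (sdb) to the new blank one (sda)
sfdisk -d /dev/sdb | sfdisk /dev/sda
# re-add each new partition to its mirror
mdadm /dev/md0 --add /dev/sda1
mdadm /dev/md1 --add /dev/sda2
mdadm /dev/md2 --add /dev/sda3
# watch the resync progress
cat /proc/mdstat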
That's it. The array rebuilt, and the machine reboots fine. But booting off the new disk *alone* failed. "grub-install /dev/md0" fixed that, but subsequently removing sda gave "Hard Disk Error" on boot from sdb (the original disk). I.e., the system now *only* boots from the "new" disk, despite both disks being cleanly part of the array.
The solution was to go into the grub CLI and run "setup" against each of the drives (which I think is different from pointing grub-install at just md0); after that, everything worked again, no matter which drive I left in the machine solo.
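From memory, the grub-shell incantation was something like this (it assumes /boot is the first partition on each disk, hence "root (hd0,0)"):
Code:
grub> device (hd0) /dev/sda
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
Mapping each disk to (hd0) in turn makes the stage1 embedded on each drive point at itself as (hd0), so either disk can boot standalone.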
(I suspect it has something to do with the third UUID brought in by the "new" disk I added for the test.)
So there are really two (2) issues here, but I wanted to start with the one detailed in my OP and tackle that first.