We were recently asked to clone a mirrored drive, and downtime is very difficult or impossible to get. What we had in mind was to break the mirror of the RAID array (we know this is a really bad idea), but we did it anyway.
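For context, this is roughly what we ran to pull the member out of the array. It is only a sketch: the array name /dev/md0 and the partition /dev/sdb1 are assumptions, and our actual layout may differ.

    # mark one mirror member as failed, then remove it from the array
    mdadm --manage /dev/md0 --fail /dev/sdb1
    mdadm --manage /dev/md0 --remove /dev/sdb1
    # confirm the array is now running degraded on the remaining member
    cat /proc/mdstat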
The second disk that was removed from the first server booted fine on the second server, and all that was needed there was to rebuild the array. However, to keep at least one good copy of the data, we decided not to perform a rebuild on the second server until the original server had a good mirror again.
On the first server, mdstat shows the array as degraded, with /dev/sdb as the drive that failed. We inserted the new disk and rescanned, but it shows up as /dev/sdc instead of /dev/sdb. Since rebooting to fix the naming wasn't an option, we added /dev/sdc to the array, and the rebuild has completed. We installed GRUB on it as well. We did this so that we would at least have redundancy in case of another immediate drive failure. However, mdstat now shows three drives: /dev/sda, /dev/sdb, and /dev/sdc (/dev/sdb showing as failed, /dev/sda and /dev/sdc as active).
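The add-and-rebuild steps were along these lines. Again a sketch, assuming the array is /dev/md0 with a single partition per disk; the partition-table copy is included only as an assumption about how the new disk was prepared.

    # copy the partition layout from the surviving disk to the new one
    sfdisk -d /dev/sda | sfdisk /dev/sdc
    # add the new member and let it resync
    mdadm --manage /dev/md0 --add /dev/sdc1
    cat /proc/mdstat        # watch the rebuild progress
    # make the new disk bootable
    grub-install /dev/sdc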
What I'm not sure about is this: if we get the chance to reboot the server, will the new disk be detected as /dev/sdb on the next boot, and will the RAID assemble automatically? Or will it still appear as /dev/sdc, with /dev/sdb simply gone? If not, has anyone encountered this issue, and how did you normalize it? Thanks.
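In case it helps anyone answering: this is how we've been checking the current state, and what we were considering to make assembly independent of the sdb/sdc naming. The config path /etc/mdadm/mdadm.conf and the initramfs update command are assumptions and vary by distro.

    # current view of the array and the member's metadata/UUID
    mdadm --detail /dev/md0
    mdadm --examine /dev/sdc1
    # drop the stale failed member so only the two active devices remain listed
    mdadm --manage /dev/md0 --remove failed
    # record the array by UUID so assembly doesn't depend on sdb vs sdc
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u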