How Can I Recover My Software RAID-1 Array?
I've got a custom machine with an ASRock AMD motherboard because I needed both PS/2 connections for my KVM switch.
I combined the disk drives from two machines into this one box, so I have a total of 6 hard drives:
4 x 1 TB SATA drives
2 x 1.5 TB SATA drives
Because I did a complete installation, I set up the RAID-1 and LVM during the install. Initially the drives were configured as:
/dev/sda | /dev/sdc - two 1 TB drives with a total of 7 RAID-1 partitions:
/dev/md0 = standard partition for /boot
/dev/md1 - /dev/md6 = LVM partitions for the remainder of the OS plus other stuff
/dev/sdb | /dev/sdd - two 1 TB drives with 2 RAID-1 partitions, both of which are LVM (/dev/md7 and /dev/md8)
/dev/hda | /dev/hdb - two 1.5 TB drives with 2 RAID-1 partitions, both of which are LVM (/dev/md9 and /dev/md10)
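For context, this layout can be confirmed with the standard md tools that ship with CentOS 5 (a sketch; the device names are the ones from the list above and will differ after any renumbering):

```shell
# Show every md array, its member partitions, and sync state
cat /proc/mdstat

# Detailed state of one array: UUID, plus each member's
# active/faulty/removed status
mdadm --detail /dev/md0

# Partition tables of all disks, to see which partitions back which arrays
fdisk -l
```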
At some point while upgrading from CentOS 5.1 x86_64 to CentOS 5.9 (with stops at 5.4 and 5.6), the drive designations changed: what was /dev/sdc became /dev/sdb, and what was /dev/sdb became /dev/sdc. I did not pay much attention to these changes because everything was running fine.
Then drive /dev/sdb failed; /dev/sdc moved down to /dev/sdb, and what was /dev/sdd became /dev/sdc.
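This renumbering is why md arrays are normally tracked by the UUID stored in each member's superblock rather than by /dev/sdX name; assembly survives the shuffle even when the letters move. A sketch of what the relevant /etc/mdadm.conf entry looks like (the UUID below is an illustrative placeholder, not from this machine):

```shell
# /etc/mdadm.conf -- mdadm assembles arrays by superblock UUID,
# so /dev/sdX renumbering does not break assembly.
# UUID is a placeholder; regenerate real entries with the command below.
ARRAY /dev/md0 UUID=00000000:00000000:00000000:00000000

# Print ARRAY lines for all currently running arrays:
mdadm --detail --scan
```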
My questions are:
1. How can I identify the failed drive (I've got nmon and webmin installed)?
2. After installing the new drive, how can I recreate a duplicate of the partition structure of /dev/sda on it?
3. Where can I find some documentation on #1 and #2?
4. One of the 1.5 TB drives is showing 213 errors; is there a number after which the drive is considered bad?