I'M HOPING OTHERS WILL REVIEW THIS AND ADD COMMENTS AND ESPECIALLY CORRECTIONS. There is risk in doing this!
So, the first thing you need to know is that, depending on which version of LVM2 you are running, you can sometimes run into bugs. I have, too many times, but those bugs have only shown themselves on very large volume groups, like a 3.1 terabyte volume group made up of 124 25GB SAN LUNs. (LVM1 is very old now and I wouldn't attempt to do anything with it!)
And because you can sometimes run into difficulty, either bugs or procedural errors in what you are doing, you really need to have a good backup of your data and be prepared to reload Linux from scratch. THIS IS REALLY IMPORTANT! Plan for failure.
That out of the way, another thing you need to know is that Linux Software RAID (aka MD, the Multiple Device driver) writes its "superblock" at the end of the devices that make up the RAID array (it usually uses the last 128K of each device, which is why a /dev/mdX device is slightly smaller than the /dev/hdX or /dev/sdX devices it is made of). LVM, by default, writes its metadata at the beginning of each physical volume that you add to it. That's why you need to create your MD device first and then add it to LVM.
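If you ever want to see that size difference for yourself, once an array exists you can compare the raw partition with the md device built on it (the device names here are just the ones from the steps below):
Code:
blockdev --getsize64 /dev/hdb2    # size of the raw partition, in bytes
blockdev --getsize64 /dev/md0     # slightly smaller; the superblock lives at the tail end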
In your case, if /dev/hdb1 is unused, you can remove it from LVM;
Code:
vgreduce <vgname> /dev/hdb1
pvremove /dev/hdb1
partition /dev/hdb the same as /dev/hda
either use 'fdisk /dev/hdb' and make it look like /dev/hda, or try
Code:
sfdisk -d /dev/hda > table
sfdisk /dev/hdb < table
use 'fdisk' to change the partition type of /dev/hdb2 to 'fd' (Linux raid autodetect)
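Inside fdisk that's only a few keystrokes, roughly:
Code:
fdisk /dev/hdb
   t     (change a partition's type)
   2     (partition number)
   fd    (Linux raid autodetect)
   w     (write the table and exit)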
create a "one device" RAID1 array out of it, /dev/md0;
Code:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hdb2 missing
echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf
add /dev/md0 to LVM;
Code:
pvcreate /dev/md0
vgextend <vgname> /dev/md0
move the LVM data off of /dev/hda2 onto /dev/md0;
Code:
pvmove -i15 -v /dev/hda2 /dev/md0
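Before you pull the old PV out of the volume group, you can double-check that the move left nothing behind on it:
Code:
pvdisplay -m /dev/hda2    # "Allocated PE" should be 0 and every extent shown as FREE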
remove /dev/hda2 from LVM;
Code:
vgreduce <vgname> /dev/hda2
pvremove /dev/hda2
add /dev/hda2 to the RAID1 array, /dev/md0;
Code:
mdadm --manage /dev/md0 --add /dev/hda2
update the /dev/md0 entry in /etc/mdadm.conf to reflect the newly added device. If it has a UUID= parameter instead of explicit devices, you are already set and no change is needed.
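The easiest way to refresh it is to let mdadm print the current ARRAY line and paste it over the old one in /etc/mdadm.conf:
Code:
mdadm --detail --scan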
watch it sync the two devices;
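You can keep an eye on the rebuild with:
Code:
watch cat /proc/mdstat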
use 'fdisk' to change the partition type of /dev/hda2 to 'fd' (Linux raid autodetect)
Simple.
Then you'll want to make the hdb drive bootable on its own.
format /dev/hdb1
'mkdir /boot2'
'mount /dev/hdb1 /boot2'
'rsync -av /boot/ /boot2/'
and write the bootloader code to /dev/hdb
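I can't tell from here which bootloader you're using; assuming GRUB (legacy), something along these lines from the grub shell should do it:
Code:
grub
grub> device (hd1) /dev/hdb
grub> root (hd1,0)
grub> setup (hd1)
grub> quit
The 'root' line points grub at the partition holding the copied /boot (hdb1), and 'setup' writes the boot code to hdb's MBR.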
You should be able to do this, or something very similar, to turn your underlying partitions into a RAID1 array.
There are variations you can do on this. You don't have to make the second drive bootable. You could take your third drive, partition it with two 10GB partitions; pair up /dev/hdc1 with the "hda partition" to make an "md0 array"; and pair up /dev/hdc2 with the "hdb partition" to make an "md1 array". (I'm not fond of this, but it'd work.) And there are other permutations.
It would be preferable to get a drive matching your third drive, create another RAID1 array out of the pair (/dev/md1), and then either add it to the same LVM volume group or make a new volume group out of it.
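If you go that route, the steps are the same as above; just to sketch it (the hdc/hdd names are only placeholders for your third drive and its match):
Code:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdc1 /dev/hdd1
mdadm --detail --scan >> /etc/mdadm.conf    # or refresh the file as described above
pvcreate /dev/md1
vgextend <vgname> /dev/md1                  # or 'vgcreate <newvgname> /dev/md1' for a new VG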
Take the time to understand the steps you are doing and make sure they make sense before you do them.
And if worse comes to worst, you have your backups and can start from scratch.
Good luck.