disaster plan: raid + lvm
I want to put together some notes on what would need to be done in case of a single drive failure on my backup server. It has two separate RAID 5 arrays over 8 disks, and one RAID 1 array over 2 disks. I then created an LVM volume over all of those so they appear at a single mount point.
Does anyone have any links that would be helpful? Or posting your notes would be good too. I pretty much already know what to do for the RAID part: that's simply partitioning the new disk to match the size of the other disks in the array and then adding it to the array. It's mostly the LVM stuff that I get lost on. I just used the RHEL installer to create the LVM setup, so I'm not sure what all the steps are to do it from the command line.
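For what it's worth, here is a rough sketch of what building the same LVM layout from the command line might look like. This is not taken from the installer's actual steps; the volume group name (backupvg), logical volume name (backuplv), and filesystem type are placeholders you'd substitute with your own.

```shell
# Sketch only: VG/LV names and filesystem are placeholder assumptions.

# 1. Initialize each RAID array as an LVM physical volume.
pvcreate /dev/md0 /dev/md1 /dev/md2

# 2. Create one volume group spanning all three arrays.
vgcreate backupvg /dev/md0 /dev/md1 /dev/md2

# 3. Create a logical volume using all free space in the group.
lvcreate -l 100%FREE -n backuplv backupvg

# 4. Put a filesystem on it and mount it at a single mount point.
mkfs.ext3 /dev/backupvg/backuplv
mkdir -p /backup
mount /dev/backupvg/backuplv /backup
```

For disaster recovery, the useful part is that after the underlying md arrays are reassembled, `pvscan` followed by `vgchange -ay backupvg` should bring the existing volume group back online without recreating anything.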
On the RAID 5 arrays, it sounds like you have some hot spares installed. The spare should be set up and switched in automatically if a problem is detected on one of the drives.
However, you haven't supplied many details about them. For example: are these RAID 5 arrays Linux software RAID, or hardware-controlled? Are they hot-swappable SCSI drives?
Hot-swappable drives can be repaired simply by replacing the bad disk.
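For software RAID, the replacement itself is done with mdadm. A minimal sketch, assuming the failed member is /dev/sdc1 in array /dev/md1 (both names are placeholders for whatever your system reports):

```shell
# Sketch only: /dev/md1 and /dev/sdc1 are placeholder names.

# 1. Check array status to identify the failed member.
cat /proc/mdstat
mdadm --detail /dev/md1

# 2. Mark the disk as failed (if the kernel hasn't already) and remove it.
mdadm /dev/md1 --fail /dev/sdc1
mdadm /dev/md1 --remove /dev/sdc1

# 3. After physically swapping the drive and partitioning it to match
#    the other members, add the new partition back to the array.
mdadm /dev/md1 --add /dev/sdc1

# 4. Watch the rebuild progress.
watch cat /proc/mdstat
```

One way to get the partition table onto the replacement disk is `sfdisk -d /dev/sdb | sfdisk /dev/sdc` (dump a surviving member's layout and apply it), though double-check the device names before running anything destructive.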
OK, I'll explain a little more. It's kind of a confusing setup, but here goes.
There are 8 drives total:
4 x 320 GB IDE drives; these make up the RAID 5 array md0
2 x 300 GB IDE drives
2 x 400 GB SATA drives
On the 400 GB SATA drives there are 300 GB partitions; those, combined with the two 300 GB IDE drives, make up the RAID 5 array md1.
On the 400 GB SATA drives there is an 80 GB partition on each; these make up the RAID 1 array md2.
On the 400 GB SATA drives there are ext2 partitions that take up the remaining 20 GB; this is where /, /boot, /usr, /tmp, and swap are located.
All of the RAID partitions are of type Linux raid autodetect, so it's software RAID. Then over the three RAID arrays I created an LVM volume so they are all at one mount point.
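Given that layout, it may be worth recording the exact current state while everything is healthy, so the notes contain something to compare against after a failure. A sketch of read-only commands that dump the relevant metadata (no device names assumed beyond the md arrays already described):

```shell
# Read-only inspection; safe to run on a healthy system.

# RAID layer: membership and health of each array.
cat /proc/mdstat
mdadm --detail /dev/md0 /dev/md1 /dev/md2

# LVM layer: which PVs belong to which VG, and the LV layout.
pvdisplay
vgdisplay
lvdisplay

# Save a copy of the LVM metadata; vgcfgrestore can use this
# to recover the volume group configuration later.
vgcfgbackup
```

`vgcfgbackup` writes to /etc/lvm/backup by default, so it's worth copying that directory (and the output of the commands above) somewhere off the server as part of the disaster plan.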