Setting up RAID 1 on existing server...
I have an existing Fedora 5 server with twin SATA 250 drives. Right now, the LVM shows them configured as a single large volume.
I want to enable RAID 1 on these drives. The setup for mdadm looks very easy.
However, if I perform the mdadm setup and activate the RAID device, will it be a problem that LVM is wrapped around both disks?
Also, the Volume filesystem is EXT3.
- Do I HAVE to change this in order to use mdadm RAID?
- If so, will changing it kill my data?
Appreciate any help. Thanks.
you'd need to rebuild your system completely, really. you'd need to totally reformat both drives as a single device with a different partition type. once done you would then partition the resulting /dev/md0, making it LVM, or separate native ext3 partitions.
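For what that rebuild looks like in practice, here's a rough sketch of the commands involved. Device names (/dev/sda1, /dev/sdb1) and the volume group name vg0 are assumptions; your layout will differ, and every step below destroys the data on the drives it touches:

```shell
# Partition both drives identically, marking the partitions as
# type "fd" (Linux raid autodetect), then build the mirror:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Option A: put LVM on top of the mirror (names are illustrative)
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 200G -n root vg0
mkfs.ext3 /dev/vg0/root

# Option B: plain ext3 straight on the array, no LVM
mkfs.ext3 /dev/md0
```

Either option answers the ext3 question: the filesystem doesn't have to change, it just gets recreated on top of the md device (directly, or via an LV).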
It could be done using just the two drives, but it would be a very complex process.
As acid_kewpie pointed out, the easiest and most straightforward option would be to reinstall.
The fastest option for retaining your current configuration and files would be to back up the system to another drive/server, repartition the drives, create the raid(s), set up LVM, restore the installation, edit grub.conf/fstab to reflect the new setup, and then remake the initrd images (which could be done before you start the conversion).
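The boot-fixing tail of that procedure, sketched for an old Fedora system (device and VG names are assumptions, and the exact kernel version must match what's installed):

```shell
# After restoring the backup onto the new array, chroot into it
# so grub.conf/fstab edits and mkinitrd see the restored system:
mount /dev/vg0/root /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
chroot /mnt

# Edit /etc/fstab and /boot/grub/grub.conf to point at the new
# md/LVM devices, then rebuild the initrd so it assembles the
# raid and activates LVM at boot:
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)
```

If the initrd isn't rebuilt, the kernel boots but can't find its root device, which is the most common way this conversion fails.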
It's straightforward to do, but by no means trivial. And before anyone asks, no one is going to post a step-by-step protocol for doing it.
No matter how you decide to proceed, the first step should always be making a full backup.
yeah i guess you technically could shrink the LVM PV off of one disk, recreate that single disk as a pre-degraded array, format it and copy the other drive across, then recreate as a full raid array.. pretty horrible though, and i'd imagine unlikely to go smoothly at all.
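The "pre-degraded array" trick above can be sketched as follows. This assumes the existing VG fits on one drive and uses hypothetical device names (sda = drive to keep for now, sdb = drive to convert first); mdadm's `missing` keyword is what creates a one-disk mirror:

```shell
# Push all extents off sdb's PV and remove it from the volume group:
pvmove /dev/sdb1
vgreduce vg0 /dev/sdb1
pvremove /dev/sdb1

# Create a degraded two-disk mirror with only sdb in it:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing

# ...format md0, copy everything across, boot from it, and only
# then hand the first drive over to complete the mirror:
mdadm /dev/md0 --add /dev/sda1
```

Until that final `--add` finishes resyncing, a single drive failure loses everything, which is part of why this route is "pretty horrible".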
It gets really nasty. Been there, done that, won't do it again.
A couple of weeks ago, I needed to convert a 2-drive mdadm-based raid1/lvm system to a 3-drive raid1/raid5/lvm setup. The idea was to degrade the original raids, create a degraded raid5 using two drives, create a new lvm physical volume on the new raid5, add it to the existing volume group, pvmove the physical extents off the degraded raid1, stop the original raid1 and recover the new raids using the newly available drive.
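Roughly, the plan described above corresponds to commands like these (device names, array numbers, and the VG name are all assumptions, not a record of what was actually typed):

```shell
# Degrade the original mirror to free one drive:
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1

# Build a degraded 3-disk raid5 from the freed drive plus the new one:
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 missing

# Put a new PV on the raid5 and pull the extents over:
pvcreate /dev/md1
vgextend vg0 /dev/md1
pvmove /dev/md0 /dev/md1   # the step that refused to run on degraded arrays
vgreduce vg0 /dev/md0
```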
Turned out that pvmove would not run on degraded raids and, of course, I didn't learn that until I had two sets of degraded raids running. But, not a lot to get worried about, since a good backup was sitting on another server if things went south.
Then the fun began. Renamed the old volume groups on the degraded raid1 so that the new ones could be created with the correct names, copied everything over from the original raid1 to the new setup, stopped the original raids, recovered the new raids and fired up mkinitrd.
But donít be fooled by how simple that sounds. It seemed like every third command returned an error message.
Everything works fine now, but it would have been a lot easier to back up, dump the original setup, create the new raid5 setup and restore the installation, hence my suggestion above.