LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - Newbie (http://www.linuxquestions.org/questions/linux-newbie-8/)
-   -   Moving RAID1 array to fresh OS installation? (http://www.linuxquestions.org/questions/linux-newbie-8/moving-raid1-array-to-fresh-os-installation-679653/)

HighLife 10-28-2008 06:49 PM

Moving RAID1 array to fresh OS installation?
 
I have a network fileserver running CentOS 4. It has one IDE drive and two SATA drives. The two SATA drives are configured as RAID1 (/dev/md0) using mdadm and mounted as /usr/data; that's where the network shares live. The IDE drive contains everything else (i.e., the OS).

Now, the machine froze up a couple of days ago. When I rebooted it I was forced to run fsck, which returned a bunch of short read errors. I ran it again with fsck -y to try and repair them. It appeared to repair them all, but one keeps coming back and forcing fsck to run every time I reboot - I suspect the IDE drive is stuffed, I'm hoping /dev/md0 is fine! I ran fsck on /dev/md0 and it reported back as clean *fingers crossed*.

I am going to run some disk diagnostic tools on the IDE drive tonight but am assuming at this stage the best way to fix it would be to replace the IDE drive and do a fresh install - then re-mount the RAID1 array to the new installation.
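In case it helps anyone else, the diagnostics I'm planning on are from smartmontools (device name is an assumption - on this box the IDE drive should show up as /dev/hda, but I'll double-check in dmesg first):

```shell
# Quick pass/fail health summary from the drive's own SMART data
smartctl -H /dev/hda

# Full attribute dump - Reallocated_Sector_Ct and Current_Pending_Sector
# are the ones to watch for a dying disk
smartctl -a /dev/hda

# Kick off the drive's long self-test; check the result later with -a
smartctl -t long /dev/hda
```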

The part I'm not sure about is how to go about moving the RAID1 array to the new disk and fresh install. Any suggestions would be appreciated!

mostlyharmless 10-29-2008 11:59 AM

If you're just replacing the IDE drive with the OS, then I would say you need to do nothing except remount the RAID array after you reinstall the OS. No migrating necessary. If you want to migrate the data on the RAID, safest would be to back it up first, then you have a variety of paths open. If you could install more disks to the system, you could add them to the array then subtract off the old ones. If you can't physically add more disks, you could break the mirror (subtract first), then add a new disk to the array...
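The add-then-subtract path would look roughly like this (device names here are placeholders - substitute whatever your members actually are, and assume the array is built on partitions):

```shell
# Add the new disk; on a 2-way RAID1 it sits as a spare until needed
mdadm /dev/md0 --add /dev/sdc1

# Fail the old member; mdadm promotes the spare and starts the resync
mdadm /dev/md0 --fail /dev/sda1

# Watch the rebuild and wait for it to finish before pulling hardware
cat /proc/mdstat

# Once resynced, drop the failed member from the array
mdadm /dev/md0 --remove /dev/sda1
```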

Maybe I don't understand the problem....

HighLife 10-29-2008 05:31 PM

OK, I thought that being software RAID, the RAID configuration stuff for mdadm would be on the boot drive (i.e., the drive that has failed)?

I was thinking that to boot from a fresh installation on a new drive and be able to see the RAID array I would need to run some mdadm reconfigure or something to initialise/reconfigure the array on the new boot drive?

Or is all the software RAID configuration contained in the actual array itself? If that's the case it will be easier than I thought to get back up and running!

HighLife 10-29-2008 06:13 PM

OK, I did a little more research. As I understand it now, mdadm uses a "persistent superblock" on the array itself to store the array configuration. So with this I should be able to rebuild the failed root partition drive, plug the array back in, and it should be detected; then I just remount the array and I'm back in business?

Please let me know if there is more to it than this - I don't want to experiment too much with 400GB of important data on the array!
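From what I've read, you can apparently inspect that superblock directly before doing anything drastic - something like this (device names are just examples, I'd substitute whatever my SATA drives actually are):

```shell
# Print the persistent superblock stored on each member disk
mdadm --examine /dev/sda1
mdadm --examine /dev/sdb1

# Summarise every array superblock mdadm can find on attached disks
mdadm --examine --scan
```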

HighLife 10-30-2008 06:02 AM

Well, I've struck another problem. I bought a new SATA drive to replace the failed IDE drive for the root filesystem.

I figured I had 2 options:

1. remove the array and install fresh OS on the new drive. Then re-attach the array and hope it boots up and detects it.

2. run the install with the array attached and hopefully manage to install on the fresh drive leaving the array intact.

I went with the first option as it seemed the safest to try first.

I had the two "array" drives plugged into SATA1 and SATA2 on the motherboard. I removed them, plugged the new drive in, and installed the OS. Now when I reboot with the array plugged back into its original position (SATA1 & SATA2), it goes nowhere. I think the problem is that when I installed the OS without the other drives, it was /dev/sda, but when the array is plugged back in, one of its drives is now /dev/sda and the OS drive is probably /dev/sdc?

Will changing the SATA ports which the array was previously using cause any issues? Maybe I should have just bought an IDE drive!

mostlyharmless 10-30-2008 11:44 AM

Quote:

mdadm uses a "persistent superblock" on the array itself to store the array configuration. So with this I should be able to rebuild the failed root partition drive, plug the array back in and it should be detected, then just remount the array and I'm back in business
That's about the size of it. You might have to reconstruct or save mdadm.conf, which is on the root drive, but mdadm can do that after assembling (not creating) the array.
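Rebuilding mdadm.conf is usually as simple as this, once the array is up (assuming CentOS keeps it at /etc/mdadm.conf):

```shell
# Assemble the array from the persistent superblocks on the member disks
mdadm --assemble --scan

# Capture the running config so future boots find the array automatically
mdadm --detail --scan >> /etc/mdadm.conf
```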

Quote:

I think the problem is that when I installed the OS without the other drives, it was /dev/sda, but when the array is plugged back in, one of its drives is now /dev/sda and the OS drive is probably /dev/sdc?

Will changing the SATA ports which the array was previously using cause any issues? Maybe I should have just bought an IDE drive!
I agree. :) If switching doesn't help, you might look at your BIOS setup and see if you can determine which drive is which. You can post your fdisk -l output and more info if you get stuck...

mostlyharmless 10-30-2008 12:13 PM

Try to make the boot drive /dev/sda. It doesn't matter where the array devices are; when you re-assemble the pieces you'll just have to tell mdadm where they are!
e.g. mdadm --assemble /dev/md0 /dev/sdb /dev/sdc
if they end up as /dev/sdb and /dev/sdc...
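If the names keep jumping around, you can also let mdadm find the members itself from the superblocks instead of naming the devices by hand:

```shell
# Report every array superblock mdadm can see on the attached disks
mdadm --examine --scan

# Assemble everything it found, matching members by UUID rather than port
mdadm --assemble --scan
```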

HighLife 10-30-2008 05:06 PM

Cheers thanks for the advice - I'll give it a go.


All times are GMT -5. The time now is 03:39 AM.