Old 10-28-2008, 06:49 PM   #1
HighLife
Member
 
Registered: Feb 2003
Posts: 36

Rep: Reputation: 15
Moving RAID1 array to fresh OS installation?


I have a network fileserver running CentOS 4. It has one IDE drive and two SATA drives. The two SATA drives are configured as RAID 1 (/dev/md0) using mdadm and mounted at /usr/data, which is where the network shares live; the IDE drive holds everything else (i.e. the OS).

Now, the machine froze up a couple of days ago. When I rebooted it I was forced to run fsck, which returned a bunch of short read errors. I ran it again with fsck -y to try to repair them; it appeared to fix them all, but one keeps coming back and forces fsck to run every time I reboot. I suspect the IDE drive is stuffed; I'm hoping /dev/md0 is fine! I ran fsck on /dev/md0 and it reported clean *fingers crossed*.

I am going to run some disk diagnostic tools on the IDE drive tonight, but at this stage I'm assuming the best fix is to replace the IDE drive, do a fresh install, and then re-mount the RAID 1 array under the new installation.

The part I'm not sure about is how to move the RAID 1 array over to the new disk and fresh install. Any suggestions would be appreciated!
 
Old 10-29-2008, 11:59 AM   #2
mostlyharmless
Senior Member
 
Registered: Jan 2008
Distribution: Slackware 14.1 (multilib) with kernel 3.15.5
Posts: 1,534
Blog Entries: 12

Rep: Reputation: 171
If you're just replacing the IDE drive that holds the OS, then I would say you need to do nothing except re-mount the RAID array after you reinstall the OS; no migration is necessary. If you want to migrate the data on the RAID, the safest course is to back it up first; then you have a variety of paths open. If you can install more disks in the system, you can add them to the array and then subtract the old ones. If you can't physically add more disks, you can break the mirror (subtract first), then add a new disk to the array, as sketched below...

Maybe I don't understand the problem....
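
For the break-the-mirror route, the usual sequence is something like this (untested sketch; the device names are placeholders, so check yours with fdisk -l first, and back up before touching anything):

Code:
# Mark one old member failed and pull it out of the mirror
# (/dev/sdb1 and /dev/sdd1 are placeholders -- substitute your real devices):
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1

# Add the replacement disk; the mirror rebuilds onto it automatically:
mdadm /dev/md0 --add /dev/sdd1

# Watch the resync progress:
cat /proc/mdstat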
 
Old 10-29-2008, 05:31 PM   #3
HighLife
Member
 
Registered: Feb 2003
Posts: 36

Original Poster
Rep: Reputation: 15
OK, I thought that with software RAID, the RAID configuration for mdadm would live on the boot drive (i.e. the drive that has failed)?

I was thinking that to boot from a fresh installation on a new drive and still see the RAID array, I would need to run some mdadm reconfiguration step to initialise the array on the new boot drive?

Or is all the software RAID configuration contained in the array itself? If that's the case, getting back up and running will be easier than I thought!
 
Old 10-29-2008, 06:13 PM   #4
HighLife
Member
 
Registered: Feb 2003
Posts: 36

Original Poster
Rep: Reputation: 15
OK, I did a little more research. As I understand it now, mdadm stores the array configuration in a "persistent superblock" on the array members themselves. So I should be able to rebuild the failed root drive, plug the array back in, have it detected, then just re-mount the array and I'm back in business?
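
From what I've read, something like this should dump the superblock off each member so I can sanity-check the array before rebuilding (the device names here are guesses on my part):

Code:
# Print the persistent superblock from each array member
# (substitute the real member devices for these placeholder names):
mdadm --examine /dev/sda1
mdadm --examine /dev/sdb1

# Or just scan everything for md superblocks:
mdadm --examine --scan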

Please let me know if there is more to it than this - I don't want to experiment too much with 400 GB of important data on the array!
 
Old 10-30-2008, 06:02 AM   #5
HighLife
Member
 
Registered: Feb 2003
Posts: 36

Original Poster
Rep: Reputation: 15
Well, I have struck another problem. I bought a new SATA drive to replace the failed IDE drive for the root filesystem.

I figured I had 2 options:

1. Remove the array and install a fresh OS on the new drive, then re-attach the array and hope the system boots and detects it.

2. Run the install with the array attached and hopefully install to the fresh drive while leaving the array intact.

I went with the first option as it seemed the safest to try first.

I had the two "array" drives plugged into SATA1 and SATA2 on the motherboard. I removed them, plugged the new drive in, and installed the OS. Now when I reboot with the array plugged back into its original ports (SATA1 and SATA2), it goes nowhere. I think the problem is that when I installed the OS without the other drives it was /dev/sda, but with the array plugged back in, one of the array drives is now /dev/sda and the OS drive is probably /dev/sdc?

Will changing the SATA ports the array was previously on cause any issues? Maybe I should have just bought an IDE drive!
 
Old 10-30-2008, 11:44 AM   #6
mostlyharmless
Senior Member
 
Registered: Jan 2008
Distribution: Slackware 14.1 (multilib) with kernel 3.15.5
Posts: 1,534
Blog Entries: 12

Rep: Reputation: 171
Quote:
mdadm stores the array configuration in a "persistent superblock" on the array members themselves. So I should be able to rebuild the failed root drive, plug the array back in, have it detected, then just re-mount the array and I'm back in business
That's about the size of it. You might have to reconstruct or save mdadm.conf, which lives on the root drive, but mdadm can do that after assembling (not creating) the array.
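e.g. once the array is assembled, something like this should regenerate the config (a sketch; adjust the path if your distro keeps mdadm.conf somewhere else):

Code:
# Capture ARRAY lines for all running arrays and append them to the config:
mdadm --detail --scan >> /etc/mdadm.conf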

Quote:
I think the problem is that when I installed the OS without the other drives it was /dev/sda, but with the array plugged back in, one of the array drives is now /dev/sda and the OS drive is probably /dev/sdc?

Will changing the SATA ports the array was previously on cause any issues? Maybe I should have just bought an IDE drive!
I agree. If switching the ports doesn't help, you might look at your BIOS setup and see if you can determine which drive is which. Post your fdisk -l output and more info if you get stuck...
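e.g. something like (just a sketch):

Code:
# See which disk landed on which device name after the reshuffle:
fdisk -l

# And see which devices actually carry md superblocks:
mdadm --examine --scan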
 
Old 10-30-2008, 12:13 PM   #7
mostlyharmless
Senior Member
 
Registered: Jan 2008
Distribution: Slackware 14.1 (multilib) with kernel 3.15.5
Posts: 1,534
Blog Entries: 12

Rep: Reputation: 171
Try to make the boot drive /dev/sda. It doesn't matter where the array devices end up; when you re-assemble the pieces you just have to tell mdadm where they are,
e.g. mdadm --assemble /dev/md0 /dev/sdb /dev/sdc
if they end up as /dev/sdb and /dev/sdc (or /dev/sdb1 and /dev/sdc1 if the array was built on partitions)...
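
Or, if you've rebuilt mdadm.conf as above, you should be able to let mdadm find the members itself:

Code:
# Assemble every array listed in mdadm.conf, wherever the members ended up:
mdadm --assemble --scan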

Last edited by mostlyharmless; 10-30-2008 at 12:15 PM. Reason: don't need other options for mdadm with persistent superblock
 
Old 10-30-2008, 05:06 PM   #8
HighLife
Member
 
Registered: Feb 2003
Posts: 36

Original Poster
Rep: Reputation: 15
Cheers, thanks for the advice - I'll give it a go.
 
  



Tags
centos, ide, mdadm, raid1, sata

