LinuxQuestions.org (/questions/)
-   Linux - Software (http://www.linuxquestions.org/questions/linux-software-2/)
-   -   Migrating to a RAID 1 setup (http://www.linuxquestions.org/questions/linux-software-2/migrating-to-a-raid-1-setup-438019/)

Danathar 04-23-2006 11:14 AM

Migrating to a RAID 1 setup
 
I have a Slackware 10.2 system (but this question really applies to any distribution).

I picked up a used 3ware 7500-4LP RAID card and would like to use it with my current Linux system, which lives on a single 30GB drive.

What is the proper way to migrate the system? I've never migrated a non-RAID install to a RAID setup.

I'd like to avoid doing a fresh install and then copying over config files and filesystems.

Searching the forums turns up several questions regarding software RAID and/or making a non-system filesystem (other than root) RAID-enabled, but nothing I can find explains how to do what I'm contemplating.

thanks for any help (in advance)

Doug

ScottReed 04-23-2006 02:05 PM

I don't know anything about 3ware RAID cards, but the first thing you need to do is make sure the Linux kernel supports them.

1 - Check for support. If supported continue...
2 - Recompile kernel with support for your card
3 - Configure your boot loader to add the new kernel so you can boot it (take care NOT to wipe out your existing kernel!!)
4 - Power off the system. Install the card. Hook up your existing drive to it.
5 - Turn on the system and boot the new kernel. If all goes well it should boot fine. If you receive a kernel panic, it will most likely be because the hard drive's device name changed (because of the RAID card), and your /etc/fstab will need to be edited. If that happens, boot from your distro's CD, mount your root fs, and edit the fstab to reflect the device name change. Reboot once again.

Once you can successfully boot the new kernel with the ONE drive hooked up to the card:

- Shutdown system. Power off.
- Hook up second drive to RAID card.
- Turn on the system. Press whatever key combo it is to enter your card's config utility. Follow the instructions to mirror your existing drive to the new drive. You SHOULD be using the exact same model of drive, btw!
- Some RAID utilities will allow the system to mirror while you continue to work. Others don't. The process could take a while.
- When it's done, you have a working hardware RAID setup.
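The fstab fix in step 5 can be sketched like this. Everything here is a hypothetical illustration: the demo works on a throwaway copy in /tmp, and the device names assume the common case where the RAID card presents the drive as a SCSI device, so /dev/hdaN becomes /dev/sdaN. On the real system, the file would be /mnt/etc/fstab after booting the rescue CD and mounting the root fs on /mnt.

```shell
# Demo on a throwaway copy -- the real file would be /mnt/etc/fstab
# after booting the rescue CD and mounting the root fs on /mnt.
mkdir -p /tmp/fstab-demo
cat > /tmp/fstab-demo/fstab <<'EOF'
/dev/hda1   /       ext3   defaults   1 1
/dev/hda2   swap    swap   defaults   0 0
EOF
# IDE names (/dev/hdaN) become SCSI names (/dev/sdaN) behind the card;
# -i.bak keeps the original around in case the edit goes wrong.
sed -i.bak 's|/dev/hda|/dev/sda|g' /tmp/fstab-demo/fstab
cat /tmp/fstab-demo/fstab
```

Keeping the .bak copy means you can restore the original fstab from the rescue environment if the new device names turn out to be wrong.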

Hope this helps

Scott

Danathar 04-23-2006 04:12 PM

Quote:

Originally Posted by ScottReed
I don't know anything about 3ware RAID cards, but the first thing you need to do is make sure the Linux kernel supports them.

1 - Check for support. If supported continue...
2 - Recompile kernel with support for your card
3 - Configure your boot loader to add the new kernel so you can boot it (take care NOT to wipe out your existing kernel!!)
4 - Power off the system. Install the card. Hook up your existing drive to it.
5 - Turn on the system and boot the new kernel. If all goes well it should boot fine. If you receive a kernel panic, it will most likely be because the hard drive's device name changed (because of the RAID card), and your /etc/fstab will need to be edited. If that happens, boot from your distro's CD, mount your root fs, and edit the fstab to reflect the device name change. Reboot once again.

Once you can successfully boot the new kernel with the ONE drive hooked up to the card:

- Shutdown system. Power off.
- Hook up second drive to RAID card.
- Turn on the system. Press whatever key combo it is to enter your card's config utility. Follow the instructions to mirror your existing drive to the new drive. You SHOULD be using the exact same model of drive, btw!
- Some RAID utilities will allow the system to mirror while you continue to work. Others don't. The process could take a while.
- When it's done, you have a working hardware RAID setup.

Hope this helps

Scott


Thanks! That's exactly what I needed to know. I may have to go out and buy two identical drives to get it running right, though.

michaelb3 06-04-2006 02:57 AM

Hahaha, let me save you A WORLD of grief.

Unfortunately, I own two 7506-8 cards from 3ware and totally bought into the hype about hardware RAID. When the cards work, they are fantastically great. But it only takes one teeny, weeny hiccup to send you flying wacky-willy down a path of devastating data loss.

The data is encoded in some bizarre-ass format. If you have any problem whatsoever that the mostly-sparse and cryptic 3ware BIOS cannot handle, you are totally, 100% sh*t out of luck.

Unless you build the array with a hot spare, it is impossible to install and add new drives to the system, which is just beyond stupid. It means that if you have a two-drive RAID1 and one drive dies, and you had not built the array with a third "hot spare" drive, you cannot add one afterward and expect to just rebuild the array. It does not work. You are forced to copy everything over to some other machine, install the new (third) drive, delete the array, and start all over. The same is true for RAID5 (I tested both and smacked into the 3ware brick wall).

I hated Linux software RAID on Red Hat 6 (the last time I tried it, prior to TONIGHT) because it ruined bunches of data. But that was way back when, in the dev days. I am still mighty, mighty leery of freebie software RAID code (aka Linux), but my bro pointed out that the cheapskate author-gnomes house their own data on the software RAID, and they tend to covet that data almost more than life itself -- meaning, it probably works.

The two most important points about 3ware hardware RAID vs Linux software RAID are that

(1) 3ware recovery/rebuild is a vanishingly narrow road -- a spider-thread, really -- of what will work, and numerous landmines exist to totally, instantly, and irrevocably wipe all your data if you make one tiny misstep. Usually the first misstep is irrecoverable, and it forces you down a path of doom and data loss, as happened to me twice. This single fact alone makes using 3ware anything, for any purpose whatsoever, akin to betting your entire data life on being able to do differential calculus under pressure, with a gun to your head and no internet connection or textbooks -- ie, totally, 100% not worth the clear and present danger. Can you tell I hate 3ware? In fact, extend that to all hardware RAID solutions, none of which appear to be any safer, including the nvidia raid0 and all adaptec -- similar horror stories.

(2) as crappy as Linux software RAID --MIGHT-- be (or might have been in its early stages), it does do one thing -- a most critical thing -- absolutely right: if any part of the RAID system fscks up, at least your data is left in a readable format, so that at the very worst you unplug your good drive, plug it into another machine, and it boots up as a regular IDE hard drive. This single fact alone makes all the difference.

As for me, I've totally given up on all this RAID sh*t, none of which works. That's a broad and fervent dismissal, I realize, but in these days of hundred-gig drives, it's no longer some lost work at risk -- it's lifetimes of work.

I'm back to single drives running ext3, auto-rsync'd each night to other drives in separate machines. This sucks for saving configs and custom installs, especially for stupidly hard-to-install things like DJB-anything, but it is the only system I've found in more than ten years that is 100% dependable.

Your mileage may vary.
