LinuxQuestions.org > Linux - Software > Linux IDE RAID Help
(https://www.linuxquestions.org/questions/linux-software-2/linux-ide-raid-help-434088/)

belvedere 04-11-2006 10:03 AM

Linux IDE RAID Help
 
Hello everyone.

I have tried to avoid having to post this inquiry but can't seem to find any documentation that applies to my situation/problem.

All I'm trying to do is simply boot to an IDE RAID array.
Basically I've been playing with various distributions in an attempt to get a spare machine of mine to successfully boot to a RAID array in Linux. I've had very mixed results. Fedora Core 5 actually sees my RAID array from the get-go as one drive; it doesn't see the two disks that make up the array. Every other distro, however, sees all the individual disks. On various distros I am able to set up my RAID partitions and proceed through the install without any problems. Once the machine reboots, though, it fails to load the OS. Now, I've read up on problems with LILO/GRUB booting to RAID arrays, but they typically seem to be related to the boot partition not being properly defined in the boot loader config. I guess I just have a couple of simple questions to help guide me to my answer.

1.) Linux uses software RAID, so do I still need my array defined in the RAID bios? Will this conflict with the way Linux manages the array via software?

2.) I've read about issues with the 2.6 kernel and RAID. Is this part of the problem?

3.) Is it possible to get this working with the default partitioning tools (Disk Druid, fdisk, etc.), or is more involved configuration required?

macemoneta 04-11-2006 11:23 AM

If you define the drives as a RAID array in the controller BIOS, then Linux should see it as a single drive. If you leave it as individual drives, then you can use software RAID.

Unless you paid a couple of hundred for your RAID controller, it is using software either way (a Linux/Windows driver is actually performing the RAID functionality). In that case, it's better to use Linux software RAID, because you have better visibility and control over the array. The drives also become controller independent - you can use any controller from any manufacturer, and the RAID array will still work. That's not the case if you are using the controller vendor's driver - even though it's really software.
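
For example, with a Linux software RAID device (here I'll assume it's /dev/md0 - your device name may differ), you can check on the array at any time:

Code:
cat /proc/mdstat          # live status of all md arrays, including rebuild progress
mdadm --detail /dev/md0   # member disks, sync state and UUID for one array

You don't get that kind of visibility with a vendor's fakeraid driver.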

belvedere 04-11-2006 12:15 PM

Quote:

Originally Posted by macemoneta
If you define the drives as a RAID array in the controller BIOS, then Linux should see it as a single drive. If you leave it as individual drives, then you can use software RAID.

Unless you paid a couple of hundred for your RAID controller, it is using software either way (a Linux/Windows driver is actually performing the RAID functionality). In that case, it's better to use Linux software RAID, because you have better visibility and control over the array. The drives also become controller independent - you can use any controller from any manufacturer, and the RAID array will still work. That's not the case if you are using the controller vendor's driver - even though it's really software.


Well, I've got an integrated Promise controller on my DFI AD70-SR motherboard. I thought that would be considered hardware RAID? Or are you saying it's still considered software, similar to a Winmodem or something?

I'm just not understanding why it will install to the RAID array without a problem, but not boot from it. Installing to the RAID array proves that the array is functional, so the machine not booting doesn't make much sense.

Slick666 04-11-2006 12:32 PM

If you are using hardware RAID (which it sounds like you are), you typically configure it in the BIOS or with a utility disk of some kind. Once this is done, the hardware presents a simulated disk to anything that wants to access the RAID, and it takes care of everything behind the scenes.
Once you define the RAID drive, all you need are the correct drivers to access it. (I know in Slackware you need to select raid.s as the startup kernel.) I've set up Slackware on a couple of RAID machines and the only trouble I've had is pointing cfdisk at the correct drive (e.g. cfdisk /dev/sdb). Other than that it's pretty much the same as any other install. I treat it as a single disk and the RAID controller handles all the messy parts behind the scenes. I set up LILO with the default settings in the MBR.
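
Something like this in /etc/lilo.conf has worked for me (the device names and kernel path are just examples from my setup - adjust them to yours), then run lilo to write the MBR:

Code:
boot = /dev/sda          # MBR of the disk the controller presents
image = /boot/vmlinuz    # your kernel
  root = /dev/sda1       # partition holding /
  label = Linux
  read-only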
I recommend using the hardware RAID. If you already have hardware to offload all the calculations to, why waste resources in Linux doing the same thing?

I hope this helps

macemoneta 04-11-2006 12:40 PM

Yes, it's just like a winmodem. The actual function is provided by the driver, which runs on the PC's CPU. That creates a proprietary, non-portable RAID array.

You have to follow all the steps in the correct order, and then you may need to boot a rescue CD and manually update the MBR (master boot record) on each drive in the array. For example:

If you have the drive configured as RAID in the controller BIOS, and Linux sees it as a single drive, then the normal installation should work, and Linux should boot. There is usually a /proc entry created by the driver that shows the status of the array.
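
The exact /proc path depends on the vendor's driver, so check its documentation. If your distribution ships the dmraid tool, that's another way to see what the controller BIOS has defined (it understands the Promise FastTrak metadata format, among others):

Code:
dmraid -r    # list the block devices carrying BIOS-RAID metadata
dmraid -s    # show the discovered RAID set(s) and their status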

If you have the drives configured as JBOD (just a bunch of disks, i.e. non-RAID) in the controller, or Linux sees them as individual drives, then you install Linux software RAID. After installation, boot to rescue mode on the installation media and write the MBR to each drive, as in the sketch below. Your distribution should document the specific steps to do this with their media, but a few Googles should turn up the instructions as well.
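
With GRUB, the sequence from the rescue shell is roughly the following - the (hd0,0)/(hd1,0) names assume /boot is the first partition on each of two drives, so adjust them to your layout:

Code:
grub> root (hd0,0)   # point GRUB at /boot on the first drive
grub> setup (hd0)    # write GRUB to the first drive's MBR
grub> root (hd1,0)   # repeat for the second drive
grub> setup (hd1)
grub> quit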

dgtlpulse2k 04-11-2006 01:25 PM

OK, here are a couple of questions that I have:


First, is there a website that shows which chipsets provide TRUE RAID functionality?

Second, I have an Nvidia nForce4 motherboard and I'm installing Fedora Core 3 (2.6 kernel). When I start, I create a RAID0 array with the RAID BIOS utility. Then, when Fedora boots, the partition manager sees each device independently. Why?

Then I manually create the partition scheme. I can elaborate if anyone needs me to, but basically I create the software RAID partitions, make them match up across the disks, and end up with:

/md0 - RAID1 mirror, /boot
/md1 - RAID0 stripe, / (root partition)

(Now this is another question: why am I creating "software RAID" partitions whose levels mismatch, i.e. /md0 is a RAID1 mirror and /md1 is a RAID0 stripe? How can that work? It's the same logical array to the hardware, I would think.)


Then, the Gentoo RAID utils boot CD works great: I can boot off the CD and watch the array via /proc/mdstat. I see the array syncing up, and I have let it finish syncing, but I still cannot boot. Many, many questions!

macemoneta 04-11-2006 02:15 PM

Both hardware and software RAID are "true" RAID. The difference is only where processing is performed (on the controller or the PC CPU). Several manufacturers sell controllers where all processing is performed on the controller, with 3ware being one of the most well known.

If the Linux system sees each device independently, then the specific controller is not supported. You may need to supply a Linux driver disk during the installation process (see the installation documentation for each distribution), or there may simply not be a driver at the moment (none supplied by the vendor, none reverse-engineered by the community).
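
On Fedora-style installers, for example, you can ask for a driver disk at the installer's boot prompt (the exact option can vary by release, so check the install guide):

Code:
boot: linux dd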

If you are going to install software RAID, you want the BIOS on the controller configured to JBOD - otherwise there may be problems.

Software RAID is generally more flexible because the Linux kernel is more capable than the simple firmware in the controllers, allowing each partition to have different RAID configurations. You can mix RAID-0, RAID-1 and RAID-5 as you need to.
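
As a rough sketch with mdadm - assuming two disks, /dev/sda and /dev/sdb, each with a small first partition and a large second one; your device names will differ - the mismatched levels dgtlpulse2k describes are perfectly normal:

Code:
# /boot mirrored on both disks, so either disk can boot the system
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# / striped across both disks for capacity and speed (no redundancy)
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2

Each md device is built from partitions rather than whole disks, so each one can use whatever RAID level makes sense for it.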

If you can't boot, then:

1. Make sure both drives are bootable in the PC (not the disk controller) BIOS
2. You may need to manually update the MBR with GRUB on each drive in a bootable RAID-1 mirror. Check your distribution's installation instructions for bootable RAID-1.

