While I won't profess to be authoritative on the subject, it would appear that you are trying to twist up your mount points around a 'unique' RAID implementation. My guess is that you've got Red Hat and it partitioned your drives so that you can divide up the high-demand mount directories (/var, /usr, /usr/local) across different physical devices. This is a good strategy for a basic web server. It also allows you some control for security purposes, as outlined in the "secure_webserver" how-to over at suse.com.
Setting up software RAID can help improve performance and achieve a reasonable level of availability.
However, this setup appears to defeat those benefits.
If you are trying to set up mounts for enhanced security or performance, do it without RAID or on a physical device that is not part of the RAID array. The gains made by using RAID are lost when you try to make your drives participate in several different arrays. It looks like you may have been confused by the docs and the default install, as it appears that you have FIVE 'multiple-devices'. So let's go back to basics...
First - the obligatory nag to go see the HowTo. http://www.ibiblio.org/pub/Linux/doc...AID-HOWTO.html
It's the one that got me going.
Here's the quick and dirty...
Think of a multiple device directive like "/dev/md0" as a container for the physical devices that participate in the array. If your physical partitions are all on the same drive, you won't have any redundancy, and you'll get LOUSY performance as each stripe in the set has to wait for the drive to finish reading/writing the previous stripe. If each partition is on a different physical device you get the benefits of redundancy and performance - especially with your SCSI setup, as it more than likely supports disconnect. Your controller can write each stripe in parallel. Your software RAID can distribute the i/o to all the devices, allowing greater (theoretical) throughput.
With your three physical drives you can have both: RAID and multiple mount points. Here's how my RAID5 setups usually go...
1) Partition each of your /dev/sdX drives with three partitions. To keep things simple, make each partition the same size on each drive. ie: if you make sda1 24M, make sdb1 and sdc1 24M as well. This keeps life simple and redundant. So...
...Make one for the /boot (sda1, sdb1, sdc1) of about 24M. That gives you lots of room for your system.map, vmlinuz, etc. Mark this partition as type 83 'Linux'
...Another partition for swap space (yes, on each drive). Nobody has run up and slapped me for using swap space yet, so I still use it. Mark this partition as type 82 "Linux Swap"
...A third partition for your RAID (yes, on each drive). Use up the rest of your disk (on each one) for this one. Mark them all as type fd "Linux RAID Autodetect".
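If your tool takes a scripted layout, the scheme above might look something like this sfdisk-style input. This is just a sketch - the swap size (256M here) is my own example, not from the original, and the exact sfdisk input syntax varies between versions, so check your man page:

```
# Hypothetical layout for /dev/sda - repeat identically for sdb and sdc.
/dev/sda1 : size=24M,  type=83   # /boot - Linux
/dev/sda2 : size=256M, type=82   # swap - Linux Swap (size is an example)
/dev/sda3 :            type=fd   # rest of disk - Linux RAID Autodetect
```

The point is only that all three drives end up with identical partition tables, which is what keeps the array symmetrical.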
Format your /boot as an ext2 filesystem (it should default to a 2048 block size as it is a small partition)
Your install routine should 'mkswap' your sdX2 parts and put them in the /etc/fstab as such.
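If you end up doing this part by hand instead, it's roughly the following (a sketch only - run as root, and triple-check the device names against your actual disks first):

```
mke2fs /dev/sda1                 # ext2 on the /boot partition
mkswap /dev/sda2 && swapon /dev/sda2
mkswap /dev/sdb2 && swapon /dev/sdb2
mkswap /dev/sdc2 && swapon /dev/sdc2
```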
If your install script will support a RAID install then we can move on, but I'm working blind here as I'm not a redhatter. Use the three partitions that you marked as type fd "Linux RAID Auto" for your /dev/md. I use /dev/md0 just because it's what I'm used to. Your install routine should be able to 'mkraid' and format it. From there you can install the system to / (root) on /dev/md0. If you want a real challenge you could use LVM (Linux Volume Manager) to create the different logical volumes you seem to want for your multiple mount points. I can't say I'd recommend this. I use LVM, but only for spanning multiple physical devices (sweet volume manager!) or multiple /dev/mdX devices (and I've only done that once).
If your install routine won't build the array for you, just make your system install everything to the first disk only, by mounting your first partition on /boot and the root "/" partition on the Linux RAID Auto partition you made. Your fstab should look something like this (I left out the mount options as I'm sure your distro will take care of those for you):
/dev/sda1 /boot ext2 defaults
/dev/sda2 swap swap
/dev/sda3 / ext2
Personally, I recommend reiserfs for your root partition if your distro supports it out of the box (plug for SuSE!!!). I also use reiserfs on my RAID devices, although I may encounter difficulties if someone ever gets a good RAID resize utility up and running.
Once your rig is up and running on the ONE drive, you can build your RAID array as outlined in the Root RAID section of the aforementioned How-To. I'll outline it here.
Build your /etc/raidtab file. DO NOT make your /dev/sda3 partition the first device entry! This is our current root device, and a little bug in mkraid won't let us mark it as a failed-disk. (It's not really failed, but we are tricking mkraid so we can build the filesystem on RAID.) DON'T use the "***" in your raidtab file! They are only there to highlight the failed-disk directive.
***** failed-disk 1
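Put together, a raidtab for this three-disk RAID5 setup could look something like the fragment below. The chunk-size and parity-algorithm values are common raidtools-era defaults I've assumed, not anything from the original post - check the How-To for what suits your load. Note that sda3 (the current root) comes LAST and is marked failed-disk instead of raid-disk:

```
raiddev /dev/md0
    raid-level            5
    nr-raid-disks         3
    nr-spare-disks        0
    persistent-superblock 1
    parity-algorithm      left-symmetric   # assumed default
    chunk-size            32               # assumed default
    device                /dev/sdb3
    raid-disk             0
    device                /dev/sdc3
    raid-disk             1
    device                /dev/sda3
    failed-disk           2                # our live root - added to the array later
```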
'mkraid' as per the How-To.
Format as per the How-To, or use reiserfs (I did)
Mount the new /dev/md0 to somewhere handy like /mnt
Copy your stuff from your current root as per the How-To
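The four steps above boil down to something like this (a sketch, not gospel - the How-To shows its own copy method, and `cp -ax` is just the variant I'd reach for since it stays on one filesystem):

```
mkraid /dev/md0          # builds the degraded array from /etc/raidtab
mkreiserfs /dev/md0      # or mke2fs if you stuck with ext2
mount /dev/md0 /mnt      # somewhere handy
cp -ax / /mnt            # copy the running root over (-x stays on this fs)
```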
I made a boot floppy, as I am a dumb-ass when it comes to lilo. Tell the boot floppy to use /dev/md0 as the root filesystem, and obviously have RAID compiled into the kernel.
I don't make any changes to my current lilo setup until I confirm that I can boot to my crippled RAID device from the floppy.
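For what it's worth, the old-school way I'd make that floppy is below - assuming your kernel image lives at /boot/vmlinuz (your path may differ), and that your kernel still honours the rdev-patched root device:

```
dd if=/boot/vmlinuz of=/dev/fd0   # raw kernel image straight onto the floppy
rdev /dev/fd0 /dev/md0            # patch its stored root device to the array
```

If that floppy boots cleanly onto the degraded /dev/md0, THEN it's safe to go update lilo on the hard disk.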
OK.. now it looks like I'm being a gas bag. Start with this and post if you are still in grief.