I think what you have is a "Fake Hardware RAID" controller. That means the RAID functions are done by a proprietary driver (from Adaptec) for each operating system. The Adaptec site has drivers for SuSE and Red Hat but those are quite old (2006). I doubt that you will find RAID drivers compatible with any current Linux distro.
Adaptec also has a "SHIM" source code package that could be used to write a driver for modern kernels and possibly Slackware.
Adaptec SATA HostRAID SHIM Package
Unless you are comfortable modifying or writing kernel drivers, I don't recommend trying this.
That was the bad news. Now here is the good news. Most "Fake RAID" controllers are just a "normal" ATA or SATA controller chip with an extra BIOS ROM and special driver software for each OS. Linux usually has "normal" (non-RAID) SATA drivers that will work with the controller. You can use the controller and disks as normal (non-RAID) disks in Linux. You will have to re-format the disks, so back up any existing data first! If you want RAID then you can use the Linux "mdadm" program to create Linux software RAID arrays. Linux software RAID has similar performance to the Fake Hardware RAID.
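For example, a mirrored pair could be created with "mdadm" along these lines. This is only a sketch; the disk names "/dev/sda" and "/dev/sdb" and the array name "/dev/md0" are assumptions, so substitute your own devices.

```shell
# Create a RAID 1 (mirror) array from two whole disks.
# WARNING: this destroys any existing data on the disks!
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Watch the initial synchronization progress
cat /proc/mdstat

# Create a file-system on the array and mount it
mkfs -t ext4 /dev/md0
mount /dev/md0 /mnt
```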
There are a few reasons why you might want to use Fake Hardware RAID instead of the Linux software RAID.
- You want to boot from a RAID array
- More than one operating system will access the same array (Windows & Linux)
- You need to "rescue" the existing data from the array
- Some other operating system (Windows) requires that RAID setup
You can use a program called "dmraid" to detect the Fake Hardware RAID arrays and configure Linux to access them. I do that with my older Promise FastTrack RAID controller. Essentially "dmraid" detects the RAID metadata on the "raw" disks and configures the Linux "Device Mapper" with the proper disk array layout. Linux can use the normal SATA drivers to access the disks, and the "Device Mapper" does the RAID functions. This only works if there is a "normal" SATA driver in Linux that works with the disk controller chip on the controller card. The proprietary driver from Adaptec isn't required when using "dmraid". The "dmraid" program does not work with every Fake RAID metadata format. I believe that the Adaptec RAID is supported.
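As a quick sketch of how "dmraid" is used, you can first ask it what it sees on the raw disks before activating anything:

```shell
# List the raw disks that contain RAID metadata
dmraid -r

# List the RAID sets that were discovered
dmraid -s

# Activate all detected RAID sets
dmraid -ay
```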
Don't confuse this "Device Mapper" RAID with the Linux "md" software RAID. The "dmraid" software and the program "dmsetup" are used to manage "Fake Hardware RAID" devices. The "mdadm" command is NOT used for this kind of RAID setup.
Your RAID arrays and partitions will use device names in this format.
Code:
/dev/mapper/adp_cgbibceck
/dev/mapper/adp_cgbibceckp1
/dev/mapper/adp_cgbibceckp2
/dev/mapper/adp_cgbibceckp5
The exact name for each array is random, but each name is unique and will be the same every time the system is booted. The name that does not end in a number is the name of an array. The names ending in numbers are for partitions.
There is a second set of device names created for "Device Mapper" devices.
Code:
/dev/dm-0
/dev/dm-1
/dev/dm-2
/dev/dm-3
/dev/dm-4
You should not use the above names. What they refer to can change when you boot the computer or connect devices. Instead, always use the names created in "/dev/mapper".
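Because the "/dev/mapper" names are stable across reboots, they are safe to use in "/etc/fstab". Here is a sketch, using the example array name and the ext4 file-systems from above; your array name and mount points will differ.

```shell
# /etc/fstab fragment (example array name, adjust for your system)
/dev/mapper/adp_cgbibceckp1   /       ext4   defaults   1 1
/dev/mapper/adp_cgbibceckp2   /home   ext4   defaults   1 2
```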
You must also be careful to NEVER write directly to the "normal" SATA disk drives. They will still appear as Linux devices, EX: "/dev/sda", "/dev/sda1". Depending on the array layout, some or all of the devices may appear to have partitions. Do not try to mount those partitions, and do not use "mkfs" or "fsck" on those devices.
You can partition an array by specifying the array name without a partition number. Then re-detect the partitions.
Code:
cfdisk /dev/mapper/adp_cgbibceck
dmraid -ay
udevadm trigger
mkfs -t ext4 /dev/mapper/adp_cgbibceckp2
To use "dmraid" you will have to build and install the program. You can download the "dmraid" sources here.
dmraid-1.0.0.rc16-18.fc18.src.rpm
That is the same version of "dmraid" that I currently use. There may be newer versions. I am also happy to upload 32-bit and 64-bit "dmraid" binaries that I have built for Slackware 14.
Here are the steps to build "dmraid".
- Un-pack the source files to a directory, EX: "/usr/src/dmraid/1.0.0.rc16-3"
- Configure the build environment
Code:
cd /usr/src/dmraid/1.0.0.rc16-3
./configure
- Build the software
Code:
make
- Install the software
Code:
make install
- For 64-bit, move the library files to the correct location
Code:
mv /lib/libdevmapper* /lib64
mv /lib/device-mapper /lib64
If you later uninstall "dmraid" move the library files back to "/lib" or delete them manually.
Use this command to detect the arrays.
Code:
dmraid -ay
You will have to add that command to an initialization script that runs before the arrays are mounted. One way to do that is using an "initrd" image with a modified "init" script.
Are you going to boot from the RAID array? There are additional steps required to install Slackware to a "dmraid" array and allow it to boot from the array.
- Build "dmraid" on a compatible version of Slackware and a compatible kernel BEFORE you install Slackware. A virtual machine program such as VirtualBox may be helpful for this.
- Copy the following files to a thumb drive, floppy or CD-ROM
Code:
/sbin/dmraid
/lib/libdmraid.so.1
/lib/libdmraid.so.1.0.0
/lib/device-mapper/libdmraid-events-isw.so
For 64-bit use "/lib64/" instead of "/lib/".
- On the target system, boot the normal Slackware setup CD
- Mount the thumb drive, CD or floppy containing the "dmraid" files
Code:
mount /dev/sr0 /mnt
- Copy the "dmraid" program and libraries to the RAM disk.
Code:
cp /mnt/dmraid /sbin
cp /mnt/libdmraid* /lib
- Dismount the disk containing the files
Code:
umount /mnt
- Use "dmraid" to detect your arrays
Code:
dmraid -ay
ls -l /dev/mapper
- Use "fdisk" or "cfdisk" to create, modify or delete partitions
Code:
cfdisk /dev/mapper/adp_cgbibceck
NOTE: "fdisk" or "cfdisk" may display slightly different names for the partitions than expected. The important thing is the array name and the ending partition number. You may see an extra or missing "p" character before the partition number.
- Edit the Slackware setup script (so that it will detect the partitions)
Code:
cd /usr/lib/setup
vi setup
- Search for the text "if probe".
- In the following two lines of the file, change "probe" to "fdisk"
Code:
if probe -l 2> /dev/null | egrep 'Linux$' 1> /dev/null 2> /dev/null ; then
probe -l 2> /dev/null | egrep 'Linux$' | sort 1> $TMP/SeTplist 2> /dev/null
- The result should look like this
Code:
if fdisk -l 2> /dev/null | egrep 'Linux$' 1> /dev/null 2> /dev/null ; then
fdisk -l 2> /dev/null | egrep 'Linux$' | sort 1> $TMP/SeTplist 2> /dev/null
- Save the file and then run setup.
- Choose your Linux partition using the "/dev/mapper/xxxx" device name, NOT a normal disk device name
- Do not set up a swap file. You will have to do that after you have installed Slackware
- Install the desired Linux packages
- Do not install LILO. You will have to install a boot loader manually
- Exit from setup
- Un-mount the target Linux system
Code:
cd /
umount /mnt/proc
umount /mnt/sys
umount /mnt/dev
umount /mnt
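A side note on the swap partition mentioned above: once the installed system is up and running, you can create the swap space manually. This is a sketch; using "p5" of the example array as the swap partition is an assumption.

```shell
# Format the partition as swap space and enable it
mkswap /dev/mapper/adp_cgbibceckp5
swapon /dev/mapper/adp_cgbibceckp5

# Add a line like this to /etc/fstab so it is enabled at boot:
# /dev/mapper/adp_cgbibceckp5   swap   swap   defaults   0 0
```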
Next you have to create an "initrd" image and install a boot loader. I have never attempted to use "lilo" with "dmraid" although it should be possible. I use "grub" Legacy that is included with Slackware 14 in the "extras". What I recommend is to install "grub" on some other Slackware 14 system and then create a "grub" boot CD or floppy for installing "grub" in "native" mode. You can find instructions for that with "info grub". I will also be happy to make a "grub" boot CD for you.
Here is what you need to do ahead of time on some other Slackware system to use "grub".
- Install the 32-bit "grub" package from Slackware 14
- Copy the files from "/usr/lib/grub/i386-pc" to a thumb drive, floppy or CD. To use grub on the same system, copy the files to "/boot/grub".
- Follow the instructions in "info grub" to create a "grub" boot floppy or CD-ROM
Building the "initrd" and installing the boot loader may take several attempts. I am going to explain the process from booting the Slackware install CD up until the point where you can use commands to create the "initrd" or install the boot loader. You will have to use the following commands every time you boot the Slackware setup CD, in order to gain access to the Linux system you are installing.
- On the target system, boot the normal Slackware setup CD
- Mount the thumb drive, CD or floppy containing the "dmraid" files
Code:
mount /dev/sr0 /mnt
- Copy the "dmraid" program and libraries to the RAM disk.
Code:
cp /mnt/dmraid /sbin
cp /mnt/libdmraid* /lib
- Dismount the disk containing the files
Code:
umount /mnt
- Use "dmraid" to detect your arrays
Code:
dmraid -ay
ls -l /dev/mapper
- Mount the target Linux file-system
Code:
mount -t ext4 /dev/mapper/adp_cgbibceckp1 /mnt
- Mount the device file-system
Code:
mount --bind /dev /mnt/dev
- Mount the proc file-system
Code:
mount --bind /proc /mnt/proc
- Mount the sys file-system
Code:
mount --bind /sys /mnt/sys
- Change the root to the target file-system
Code:
chroot /mnt
- Use shell commands to build the "initrd" or install the boot loader.
When you are ready to make another boot attempt, here is how you restart the system.
- Exit from the shell to change the root back to the RAM disk file-system
- Dismount the target file-system
Code:
cd /
umount /mnt/sys
umount /mnt/proc
umount /mnt/dev
umount /mnt
- Press Ctrl+Alt+Delete to reboot
Here are the steps to create the "initrd" for booting with "dmraid". You can use an "initrd" even if you aren't booting from RAID.
- Perform the steps needed to access the target Linux system being installed
- Create the files for the "initrd"
Code:
mkinitrd -k 3.2.29-smp -c -r /dev/mapper/adp_cgbibceckp1 -f ext4 -u -L
NOTE: You can skip the above step if you just want to edit or add files to an "initrd" that you have already created.
- Edit the "init" script to add the "dmraid" command
Code:
cd /boot/initrd-tree
vi init
- Search for the text "$RESCUE"
- You should add the "dmraid" command just before this line
Code:
if [ "$RESCUE" = "" ]; then
- The result should look like this
Code:
# Find any dmraid detectable partitions
dmraid -ay
if [ "$RESCUE" = "" ]; then
# Initialize RAID:
- Copy files needed by "dmraid" to the "initrd"
Code:
cd /boot/initrd-tree
cp -p /sbin/dmraid sbin
cp -a /lib/libdmraid.so* lib
- If you have written "udev" rules that you want to include in the "initrd", copy them
Code:
cp -a /etc/udev/rules.d/70-local.rules etc/udev/rules.d
- Recreate the compressed "initrd" image using the new files
- Dismount devices and reboot to test the "initrd" changes
Before you can install "grub" you have to copy the needed files to "/boot/grub" and create the "/boot/grub/menu.lst" configuration file.
- Boot the normal Slackware setup CD
- Perform the steps needed to access the target Linux system being installed
- You can remove the Slackware installation CD after booting from it
- Mount the thumb drive, floppy or CD-ROM that contains the grub files from "/usr/lib/grub/i386-pc"
Code:
mount /dev/sr0 /mnt/cdrom
- Create the "/boot/grub" directory if it does not already exist
- Copy the grub files to "/boot/grub"
Code:
cp /mnt/cdrom/* /boot/grub
- Create or edit the "menu.lst" file
Code:
cd /boot/grub
vi menu.lst
- Dismount the drive containing the "grub" files
- Put in the "grub" boot CD, disk or thumb drive so that it is ready to be booted
- Reboot the computer. Boot from the "grub" boot disk.
- When you see the "grub" menu, press "C" on the keyboard to enter "native" command mode
- You can use the "find" command to determine the device name of your Linux partition
Code:
find /boot/grub/menu.lst
find /boot/vmlinuz
- Set the default "root" device to the partition containing your Linux system
- Install "grub" to the Master Boot Record
The "setup" command specifies where to write the "grub" boot sector. Hard disks are numbered starting with 0 for the first hard disk. Partitions are also numbered starting with 0 for the first partition. A device name such as (hd0) specifies an entire hard disk (or the MBR). A device name such as (hd0,0) specifies a partition. Partitions 0 through 3 are primary partitions. Partitions 4 and above are logical drive partitions in an extended partition. For example, the second partition on the third hard disk would be "(hd2,1)". You can write the "grub" boot sector to the Master Boot Record or a partition boot sector.
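Putting that together, a typical session at the "grub" command prompt might look like this. The device names are assumptions (Linux on the first partition of the first disk, boot sector written to the MBR); adjust them for your layout.

```shell
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
```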
Here is an example "menu.lst" file.
Code:
default 0
timeout 5
title Slackware Linux
root (hd0,0)
kernel /boot/vmlinuz vga=791 root=/dev/mapper/adp_cgbibceckp1 ro vt.default_utf8=0 load_ramdisk=1 ramdisk_size=4096
initrd /boot/initrd.gz
If you want to add a boot option for Windows it should look similar to this.
Code:
title Windows
rootnoverify (hd0,1)
chainloader +1
The last thing that I'll mention is "udev" rules for use with "dmraid". Writing "udev" rules is a little tricky because the "/dev/mapper" names are totally separate from the kernel device names "/dev/dm-X". You should not match against specific "dm-X" device names. However, you can test for the kernel name "dm-" to determine whether a device is a "Device Mapper" device or some other kind of device.
This is an example of how NOT to write a rule.
Code:
KERNEL=="dm-3", SYMLINK+="sdr2"
There is no guarantee which partition will be associated with "dm-3". That could change when the system is rebooted or another "Device Mapper" device is created. The result could be a corrupted RAID array or partition.
Here is a good example for creating some normal looking device names "/dev/sdr", "/dev/sdr1", etc. that correspond to the "/dev/mapper" names.
Code:
# /etc/udev/rules.d/70-local.rules: local device naming rules for udev
KERNEL!="dm-[0-9]*", GOTO="skip_arrays"
PROGRAM="/sbin/dmsetup info -c --noh -o name %N", ENV{ID_DM_NAME}="%c"
ENV{ID_DM_NAME}!="adp_cgbibceck*", GOTO="skip_array0"
ENV{ID_DM_NAME}=="adp_cgbibceck", SYMLINK+="sdr"
ENV{ID_DM_NAME}=="adp_cgbibceckp1", SYMLINK+="sdr1"
ENV{ID_DM_NAME}=="adp_cgbibceckp2", SYMLINK+="sdr2"
ENV{ID_DM_NAME}=="adp_cgbibceckp5", SYMLINK+="sdr5"
LABEL="skip_array0"
LABEL="skip_arrays"
The "dmsetup" program is used to look up the "/dev/mapper" name of a "Device Mapper" kernel device. An environment variable called "ID_DM_NAME" is set to the name. The name is then matched against rules that create the desired device names. The names appearing under "/dev/mapper" are guaranteed to be consistently assigned to the same array and partition all the time.
You can also write "udev" rules based on the partition UUIDs. Be careful to prefix the rules file name with "70-" so that rules run after the environment variable "ID_FS_UUID" has been set. If you need to run your rules earlier, change the name of the rules file and un-comment the line that refers to "blkid". That rule sets the "ID_FS_UUID" environment variable and others provided by "blkid".
Code:
# /etc/udev/rules.d/70-local.rules: local device naming rules for udev
KERNEL!="dm-[0-9]*", GOTO="skip_arrays"
#IMPORT{builtin}="blkid"
ENV{ID_FS_UUID}=="1840c578-2eeb-459f-a93b-0b1d544d20f3", SYMLINK+="sdr1"
ENV{ID_FS_UUID}=="1740c378-2aeb-459f-a93b-0b1d521c20e7", SYMLINK+="sdr2"
ENV{ID_FS_UUID}=="1247c579-2ebb-459f-a93b-0b4d576a20f5", SYMLINK+="sdr5"
LABEL="skip_arrays"
You can use the "blkid" command to find out the UUIDs of the partitions.
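For example, you can query each partition by its "/dev/mapper" name (the array name here is the example one from above; the UUID values printed will be your own):

```shell
# Print the UUID and file-system type of each array partition
blkid /dev/mapper/adp_cgbibceckp1
blkid /dev/mapper/adp_cgbibceckp2
blkid /dev/mapper/adp_cgbibceckp5
```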