Originally Posted by Fett2
I'm planning on setting up a RAID 0 array (I mistyped it as RAID 1 in the title) and attempting a Slackware install. It's been a few years since I've used Slackware (and Linux in general). I have a motherboard with an ICH10R southbridge, which supports RAID. I know this isn't real hardware RAID, which will pose some difficulties in getting Linux installed.
From reading some forum posts it seems the only way I can get Slackware installed is by using mdadm. Is mdadm already included in Slackware 12.2? (I'm actually going to be attempting to install Slack64 via http://linuxtracker.org/index.php?pa...2182994a9f1e52)
Will I need to do anything special besides running the mdadm tools prior to fdisk?
Sorry for my newbieness. I've never attempted installing Linux on RAID before. I'm trying to get a good idea of what to expect before I buy the hard drives and try it. Seeing as I haven't purchased the hardware yet, buying a real RAID card could be an option, but I'd rather just use the fake RAID if possible.
I am using Slackware with ICH10 "fake RAID 0" on an ASUS P6T motherboard. I use "dmraid" for that. The standard Slackware installation CDs would not work for me because they didn't include support for mirror or stripe sets, or "dmraid". 12.2 may include support for mirror and stripe sets, but it still does not include "dmraid".
I had to create a custom boot CD in order to install Slackware. I also had to build a Slackware kernel and compile "dmraid". I can provide more details, and you can also find them by looking for my posts here. If you decide to try this, it will be much easier if you connect one non-RAID disk that you can use. Install Slackware to that disk first, and then use it to create the files needed to support booting from RAID.
There are basically three choices for doing RAID.
- Fake hardware RAID
- Operating System software RAID
- True Hardware RAID
Fake hardware RAID provides two main advantages. First, one can boot from the RAID array. Second, all operating systems can access each other's RAID partitions (if they have drivers). Fake hardware RAID is based on proprietary software, and the format of the RAID array depends on the controller (actually on the BIOS firmware on the controller). The RAID is implemented in software using proprietary drivers written by the manufacturer. Here is where "dmraid" comes in. The "dmraid" program is a device-mapper configuration program (device-mapper is the same kernel layer that the Logical Volume Manager uses) that can understand many proprietary RAID formats (including the one for ICH10). "dmraid" configures the device-mapper devices to access the RAID arrays and then has no further involvement in accessing the RAID.
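You can ask "dmraid" to list the proprietary metadata formats it understands; the Intel/ICH format shows up as "isw" in the output.

dmraid -l    # list the supported fake-RAID metadata formats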
Operating System software RAID has similar performance to "fake hardware" RAID. It uses a RAID format specific to a particular OS, such as Linux (md) or Windows (dynamic disks). Only the operating system that formatted the RAID partitions can access them. Depending on how the RAID array is configured, it might not be possible to boot from the RAID array. The advantage of this kind of RAID is that one can use standard hardware and the software supported by the OS, and install the OS normally. It is even possible to combine different disk controllers to create RAID arrays.
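This is the route that "mdadm" (which you asked about) takes. As a rough sketch (the device names here are examples, not your actual ones), creating and formatting a Linux software RAID 0 set looks like this:

mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1    # stripe two partitions
mke2fs -j /dev/md0                                                        # format the new array

Note that Linux "md" RAID will not see the array that the ICH10 BIOS defines; it is a separate, Linux-only format.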
True hardware RAID makes each RAID array appear to be one disk (often on a single disk controller). It offloads the work of RAID to the controller instead of the CPU doing I/O transfers to multiple disks. Like "fake hardware" RAID, it requires proprietary drivers, but they are close to normal disk controller drivers and are often included with operating systems. This is the best way to implement RAID if the hardware is supported by the operating systems.
I will briefly describe how I configured Slackware to boot from "fake hardware" RAID on the ICH10.
I created an "initrd" RAM disk image that uses a modified "init" script to run "dmraid". It activates the RAID arrays and then continues the normal boot, switching to the real root filesystem. There were a few details, so I created a small script to generate the "initrd" file. The changes to the "init" script were minor (only a few lines).
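The core of the change is just running "dmraid" before the root filesystem gets mounted. A sketch of the added lines (the "/dev/sdr2" name is my example name from the udev rules described below):

# added near the top of the initrd "init" script, before the root mount
/sbin/dmraid -ay    # activate the fake-RAID sets via device-mapper
# ...the rest of the stock script then mounts /dev/sdr2 as the root device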
I had to use "grub" since it reads the disk through the BIOS, and during booting it is the controller's BIOS that understands the RAID array format. I created a "grub" boot CD in order to install GRUB.
Since "dmraid" creates long, cryptic device names I created a "udev" rules file to assign more friendly names such as "sdr", "sdr1", etc. I had to create the root device and swap device in the "/dev" directory using a boot CD (while udev was not running) so that they would be present during kernel start-up. That allowed me to consistently use those names in the boot parameters and "fstab".
I compiled "dmraid" and built a kernel that included the device-mapper (RAID and LVM) support with the stripe and mirror targets.
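For reference, these are the kernel options involved (option names as they appear in current kernel trees; verify them against your kernel version):

CONFIG_MD=y            # "Multiple devices driver support (RAID and LVM)"
CONFIG_BLK_DEV_DM=y    # device-mapper core; includes the linear and striped targets
CONFIG_DM_MIRROR=y     # mirror target, needed for RAID 1 sets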
One can use "dmraid" to detect RAID arrays like this.
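dmraid -ay    # activate ("-a y") all of the RAID sets that are found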
The program will create names in the "/dev/mapper" directory. For example:
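/dev/mapper/isw_cdjeheefij_Volume0     # the whole array (Intel/ICH sets get the "isw" prefix)
/dev/mapper/isw_cdjeheefij_Volume01    # partition 1 of the array
/dev/mapper/isw_cdjeheefij_Volume05    # partition 5 of the array

(The serial-like string in the middle is made up here; yours will be different.)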
The numbers at the very end are partition numbers, but they don't necessarily correspond to the minor device ID numbers. You can find the major and minor device ID numbers like this.
ls -l /dev/mapper/*
Note the major and minor device IDs so that you can create udev rules and files under "/dev" for booting. If you use a different kernel than the one where you ran "dmraid" the major number may change, but the minor numbers will be the same.
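For example, if "ls -l" showed the root partition as major 253, minor 2 and the swap partition as major 253, minor 5 (these numbers are only illustrative), the device nodes for booting would be created like this:

mknod /dev/sdr2 b 253 2    # root partition
mknod /dev/sdr5 b 253 5    # swap partition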
I had some problems with "dmraid". The version that I used is "1.0.0.rc14 (2006.11.08)". I had problems with other versions not detecting the RAID arrays. Also, I had problems with logical partitions in an extended partition. If there is any empty space between the partitions in the extended partition, then "dmraid" will not detect the partitions following the empty space. To fix that problem I put the swap partition at the beginning of my extended partition (as a logical partition) and then used Linux "cfdisk" to re-create the swap partition to use the empty space left by Windows when it created the partition. If you avoid using an extended partition then you won't have the problem.
Another suggestion I have is to format your Linux partitions using 128-byte inodes instead of the default 256-byte inodes. That will allow other software to access the partitions, such as the ext2 driver for Windows.
To format the partition with 128-byte inodes you use a command like this.
mke2fs -j -I 128 /dev/sdr2
Replace "/dev/sdr2" with the correct device. Also, leave off "-j" if you want ext2 instead of ext3 (ext3 is ext2 with a journal). Make sure that you don't re-format the partition if you later use the Slackware SETUP program.
Here are the special files that you will need.
- Boot menu for grub
- Init RAM disk image
- Script to create the initrd
- Modified version of the "init" script for the initrd
- Root device ("/dev/sdr2") created before udev runs
- Swap device ("/dev/sdr5") created before udev runs
- udev rules to create the "/dev/sdr" devices
- The "dmraid" program compiled for your distro
The "/dev/sdr2" and "/dev/sdr5" are just examples of names. Yours can be different. I just picked "sdr" since it is unlikely to be used by Linux for a scsi disk and is still a standard name. The number corresponds to a partition number so you should use the correct number for your configuration.
There are a few files that I didn't mention. You will need the "grub" bootloader files in "/boot/grub". You will also need the kernel sources and headers in "/usr/src/linux" in order to build the kernel.
I can post the scripts, but I believe that I already posted them elsewhere in the forums. The three files that you have to edit (or create) are the "init" script for the "initrd" that runs "dmraid", the script to make the "initrd" image, and the "udev" rules file. The "grub" boot menu is essentially standard, but it does include the options to use an "initrd" and the "/dev/sdr2" (or whatever) root device.
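For reference, a minimal GRUB "menu.lst" entry along those lines might look like this (the kernel and initrd file names are assumptions; use your own):

title Slackware on RAID
root (hd0,1)
kernel /boot/vmlinuz root=/dev/sdr2 ro
initrd /boot/initrd.gz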
After you get all the files created using a "normal" hard disk, you can copy them to the RAID array using a boot CD or the Linux OS on the hard disk. Just use the "cp -a" command. I found that a boot CD works better, and I can provide you with a script to make a boot CD with "dmraid". I did build a different kernel for the boot CD that has "CD" after the kernel version.
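As a sketch of that copy step (the mount points and device names here are examples, not the real ones):

mount /dev/sdr2 /raid    # the new root partition on the RAID array
mount /dev/sda2 /disk    # the existing install on the normal disk
cp -a /disk/. /raid/     # "-a" preserves owners, permissions, and links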