Sounds like the installer is installing to the RAM disk until you run out of memory.
Linux support for FakeRAID is basically nonexistent.
mdadm can access Intel RAID sets, but no other metadata format is, or is ever likely to be, supported.
dmraid can activate RAID 1 sets created by a wide range of controllers, but will choke on RAID 5 sets, as the current device-mapper raid456 module is incompatible with it. Some Linux distributions ship custom kernels with a backported raid45 module, but it's basically a mess.
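To see what your own tools actually support, both utilities can report their capabilities. This is a hedged sketch: `dmraid -l` lists the on-disk metadata formats that particular dmraid build recognizes, and `mdadm --examine --scan` reports any arrays (including Intel IMSM containers) it finds on attached disks; each line falls back to a message if the tool isn't installed.

```shell
# List the metadata formats this dmraid build recognizes, and any
# arrays mdadm can see. Output depends entirely on your system.
command -v dmraid >/dev/null && dmraid -l || echo "dmraid not installed"
command -v mdadm  >/dev/null && mdadm --examine --scan || echo "mdadm not installed"
```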
I actually have a number of Slackware systems booting from dmraid-driven fakeRAID RAID 1 sets. To get this to work, here is the procedure I followed:
- Create the RAID set using the controller firmware setup routine
- Boot from a System Rescue CD (which contains dmraid) and create partitions
- Power down the system and unplug one drive
- Boot from the Slackware install DVD and install to the existing partitions on /dev/sda
- Replace all partition references in /etc/fstab with UUIDs or labels
- Download and compile dmraid
- Create an initrd with dmraid (the init script must be modified to include the /sbin/dmraid -ay command)
- Create two entries in lilo.conf: one with an initrd and one that boots directly from /dev/sdaN
- Power down, plug the 2nd drive back in and resync the RAID array
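The dmraid and initrd steps above boil down to a handful of commands. This sketch only prints them rather than running them, since the real thing rewrites the boot sector; the kernel version and module list passed to Slackware's mkinitrd are placeholders, not values from a real system.

```shell
# Dry-run sketch of the dmraid/initrd/lilo steps; kernel version and
# module list are placeholders. Printed, not executed, because the
# real commands touch disks and the boot sector.
cat <<'EOF'
/sbin/dmraid -ay              # activate every fakeRAID set dmraid can find
mkinitrd -c -k 4.4.14 -m ext4 # build the initrd, then add the dmraid binary
                              # and a '/sbin/dmraid -ay' call to its init script
lilo                          # rewrite the boot sector after editing lilo.conf
EOF
```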
Steps 1-4 ensure that partitions are created within the limits imposed by the fakeRAID controller. A RAID 1 set is just two drives with some RAID metadata at the end, so once the set and the partitions have been created, either drive can be used as if it were a single non-RAID drive.
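You can convince yourself of this layout without any real hardware. The sketch below fakes two RAID 1 members as plain files (names and sizes are made up): identical data areas, with a small differing metadata blob tacked onto the end of each. Comparing everything except the trailing metadata shows the "drives" are interchangeable.

```shell
# Simulate two RAID 1 members as 4 MiB files with identical data areas
# and different 16-byte trailing metadata (all names/sizes invented).
dd if=/dev/zero of=diskA.img bs=1M count=4 2>/dev/null
cp diskA.img diskB.img
printf 'METADATA-A' | dd of=diskA.img bs=1 seek=$((4*1024*1024-16)) conv=notrunc 2>/dev/null
printf 'METADATA-B' | dd of=diskB.img bs=1 seek=$((4*1024*1024-16)) conv=notrunc 2>/dev/null
# Everything before the metadata still compares equal:
cmp -n $((4*1024*1024-16)) diskA.img diskB.img && echo "data areas identical"
rm -f diskA.img diskB.img
```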
The result is a system that boots off a BIOS-supported fakeRAID array, but can never be directly upgraded to a more recent kernel, as lilo throws a fit when it sees the device-mapper root device. The only way to update the lilo boot sector is to temporarily boot the non-dmraid entry, run lilo, reboot and resync the RAID.
Alternatively, unplugging a drive will also do the trick, as lilo seems perfectly happy to write to a degraded dmraid set.
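For reference, the two lilo.conf entries mentioned in the list above might look roughly like this; the volume name, kernel path and partition numbers are invented for illustration:

```
# Hypothetical lilo.conf fragment -- device names are examples only.
image = /boot/vmlinuz
  label = raid
  initrd = /boot/initrd.gz       # init script runs /sbin/dmraid -ay
  root = /dev/mapper/isw_xxxxxxxx_Volume01
  read-only

image = /boot/vmlinuz
  label = plain                  # fallback: boots one mirror half directly
  root = /dev/sda1
  read-only
```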
All this to ensure the system will still boot from a RAID 1 fakeRAID array with a failed drive. And of course, none of this will work with a RAID 0 set, as both drives are needed at all times, or a RAID 5/6 set, as dmraid won't be able to assemble the array due to the incompatible kernel module.
Edit: To figure out what the Slackware installer is actually doing, switch to another console and type mount. If all is well, the RAID device should be mounted at /mnt. I'm willing to bet it isn't, which means you're installing to the RAM disk.
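The same check can be scripted against /proc/mounts, which is a bit easier to eyeball than the full mount listing. A minimal sketch:

```shell
# Quick sanity check from another console during the install:
# is anything actually mounted at /mnt, or are you writing to the RAM disk?
if grep -q ' /mnt ' /proc/mounts; then
    echo "a filesystem is mounted at /mnt"
else
    echo "nothing is mounted at /mnt -- the installer is writing to the RAM disk"
fi
```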