I have a KT7-RAID motherboard which uses the hpt370 driver. I'm currently running Linux kernel 2.4.31 + Slackware 10.1 with the ATARAID hpt370 driver. I want to upgrade to a 2.6 kernel, but unfortunately it does not support the RAID feature of the HPT370. So I set out to upgrade my existing 2.4.31 setup to the point that it would support the RAID array using the "new" way, in preparation for a painless upgrade to 2.6.
After much reading I figured out I needed dmraid, and I have managed to get it working. In addition, I use this RAID as my boot/root device, and this is where my problems come in. For completeness, here is the short version of what I've done so far:
1. Downloaded device-mapper, dmraid, and the lilo patch; ran configure, make, and make install on each.
2. Patched kernel, enabled device mapper in addition to hpt370 driver, and hpt370 ATARAID driver, built MD raid tools as modules.
( #2 actually came before parts of #1, but needless to say all went well here at this point )
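For reference, step 1 was roughly the following; the tarball names and versions below are placeholders, not the exact ones I used:

```shell
# Build and install each piece from source. Package names/versions are
# placeholders; substitute whatever tarballs you actually downloaded.
for pkg in device-mapper dmraid; do
    tar xzf $pkg.tar.gz                   # unpack the source
    (cd $pkg && ./configure && make && make install)
done
# The lilo patch is applied to the lilo source tree before rebuilding it:
#   patch -p1 < lilo-devmapper.patch      # patch filename is a placeholder
```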
3. Ran dmraid to create the /dev/mapper entries. No problems here: the RAID partitions are detected, activated, and mountable at this point, at least to the extent that I could unmount my swap partition from /dev/ataraid/d0p5 and mount the corresponding /dev/mapper device in its place.
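The activation and swap test in step 3 was essentially this; `<raidset>` is a placeholder for whatever set name dmraid actually reports:

```shell
dmraid -ay                       # activate all discovered RAID sets
dmraid -s                        # show the status of the sets it found
ls -l /dev/mapper/               # the mapped block devices appear here

# Moving swap over to the device-mapper node (paths are examples):
swapoff /dev/ataraid/d0p5
swapon /dev/mapper/<raidset>p5
</dev/null
```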
4. created boot image using mkinitrd, populated initrd-tree with dmraid, dmsetup, needed libs, modified linuxrc to load "dmraid -ay"
( Incidentally, running dmraid -ay produces a lot of "unable to mount sysfs" complaints due to the lack of udev support in the 2.4 kernel, but it seems able to find the devices fine through the normal /dev. Based on what I've read this doesn't seem to be a technical problem, just annoying. )
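The linuxrc inside my initrd ended up something like the sketch below; the module path and binary locations are illustrative, and assume dm-mod and the dmraid/dmsetup binaries plus their libs were copied into the initrd tree as described in step 4:

```shell
#!/bin/sh
# linuxrc sketch: bring up device-mapper, then activate the RAID sets
# so the kernel can mount the real root from /dev/mapper.
/bin/mount -t proc none /proc
/sbin/insmod /lib/dm-mod.o       # module path inside the initrd is a placeholder
/sbin/dmraid -ay                 # activate all RAID sets
/bin/umount /proc
```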
5. Modified lilo.conf to include the initrd, with boot= and root= pointing at the /dev/mapper devices, then ran "lilo" to update the boot record. No problems here.
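The lilo.conf fragment for step 5 looked roughly like this; `<raidset>` again stands in for the name dmraid assigned, and the kernel/initrd paths are examples:

```
# /etc/lilo.conf fragment -- device and file names are placeholders
boot=/dev/mapper/<raidset>
image=/boot/vmlinuz
  initrd=/boot/initrd.gz
  root=/dev/mapper/<raidset>p1
  label=linux
  read-only
```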
6. Updated fstab and mtab to point at the /dev/mapper partitions.
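And the corresponding fstab entries; `<raidset>`, the partition numbers, and the ext3 filesystem type are all placeholders for my actual layout:

```
/dev/mapper/<raidset>p1   /      ext3   defaults   1 1
/dev/mapper/<raidset>p5   swap   swap   defaults   0 0
```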
7. I boot into the new kernel, and immediately after it prints "Freeing unused kernel memory..." the system simply hangs. There are no errors or any other indication of a problem. I can see that dmraid and everything else that should load does get loaded up to that point. After that last message it should proceed to boot from the main / filesystem.
At this point, I'm kind of stumped. My best guess is that it either can't read /sbin/init for some reason, or worse. Another thought I had while writing this: maybe this is related to the VFS lock patch? So far I've only found vague references to it and the init function. During the device-mapper source install I couldn't figure out a reason to use the VFS lock patch, so I didn't apply it. I dunno... read more I must.
I write this hoping someone out there can help me over this last mile... I've got three days so far into this little black-hole project.
I'm also wondering at this point whether my life would be entirely easier, and performance perhaps faster, if I just reformatted the whole system (it's a fairly new install anyway) and set up the Highpoint drives with Linux software RAID instead of the BIOS implementation. But I'm worried that would be slower?
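If I did go the software-RAID route, my understanding is the setup would be something like the following; the device names are examples for drives on the HPT370 channels, and --level=0 assumes a striped set (use --level=1 for a mirror):

```shell
# Hypothetical md setup replacing the BIOS RAID -- device names and
# RAID level are assumptions, not my actual configuration.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/hde1 /dev/hdg1
mdadm --detail /dev/md0          # verify the array came up
```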