RAID 5 boot issues
Hello everyone,
I'm somewhat stuck :) I'm fiddling with an Intel SS4200 NAS, where I managed to install Slackware 13.1 on a spare IDE HDD that was lying around (instead of that crappy 256 MB DOM). Anyway, everything works except one thing.

The setup is: 1x IDE HDD with Slackware on it, plus 4x 1 TB drives in RAID 5, with LVM on top providing 3 logical volumes. Everything works: /dev/md0 is created, with an LVM physical volume on top of it and the 3 logical volumes. Well, at least until I reboot :) After a reboot /dev/md0 is not automatically assembled, and because of that LVM stays inactive.

Of course, I could write a script, put it in rc.local, and have it activate and mount what I want, but I'm sure there is a more elegant solution. At the moment I need to issue: mdadm --assemble /dev/md0, then vgchange -ay and mount -a, plus vgchange -an at shutdown. I checked the parts of rc.S concerning LVM, but I'm clueless.

The kernel on the system is 2.6.36, mdadm is v2.6.9, and LVM is version 2.02.64. So, if anyone has an enlightening solution, please let me know.

Regards. |
Use an initrd for boot...
Excerpt from the mkinitrd manual:

-L  This option adds LVM support to the initrd, if the tools are available on the system.
-R  This option adds RAID support to the initrd, if a static mdadm binary is available on the system. |
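Building such an initrd might look something like this. A sketch only: the kernel version matches the one mentioned in the thread, but the filesystem module (ext4 here) and the output path are assumptions you should adjust for your own setup.

```shell
# Build an initrd with RAID (-R) and LVM (-L) support for kernel 2.6.36.
# -c clears any previous initrd tree, -m loads the root filesystem module
# (ext4 is an assumption; use whatever your root filesystem needs).
mkinitrd -c -k 2.6.36 -m ext4 -R -L -o /boot/initrd.gz
```

Then point the `initrd =` line of your lilo.conf entry at the generated image and rerun lilo so the new initrd is actually used at boot.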
I tried with a new initrd, with the same result. In the end, I don't need LVM support at boot, only after the initrd finishes loading. Or at least I think so, because my system is not installed on that RAID array; Slackware has its own HDD all to itself. :)
Regards, S.