Quote:
Originally Posted by PeggySue
I want to build a system using software raid 5. It will have one stable Debian distro, a shared /data area, a boot partition and several raid devices to install and test drive other linux distros. The Debian distro will be in charge of the boot partition.
My problem is that I can't visualize where the data that defines the raid devices is stored.
If I create arrays from partitions on all disks for /boot, swap, /data and / for Debian leaving space for other distros, I think I can build a Debian system on raid. So far so good, but when I install the next distro I will have to omit the boot install and update Debian grub to boot to the new system.
The definition for the new / partition may be md5, for example, but that won't appear in the Debian /dev folder. So where is it? And how do I get Debian on raid to see distro2 on another raid device?
I have created a Debian on raid without a boot loader but I can't see it from my Mint9 which is on a separate non raid disk. update-grub in Mint doesn't find Debian on raid!!
Please forgive me if I am wasting your zero-reply status, but I think your approach is somewhat convoluted for what you want to accomplish. In the past I had a RAID 1 / LVM configuration that could dual boot Debian and Gentoo, and I could add any other distro I wanted fairly easily.
The overall configuration looks like this:
Code:
MBR  | Hard Disk 1 : Hard Disk 2 : Hard Disk 3 : Hard Disk 4 ... |
grub | RAID Device 1            | RAID Device 2                  |
     | ext3/xfs boot partition  | LVM volume group               |
     |                          | Debian LV, Ubuntu LV, etc.     |
You can create this partitioning/layering scheme fairly easily with a Debian Net-Install disk.
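If you prefer to set the layers up by hand (from the installer's shell or any live environment with mdadm and the LVM tools), the rough sequence looks like this. This is only a sketch: the device names (/dev/sd?1, /dev/sd?2), RAID levels, volume group name vg0, and sizes are assumptions for illustration, so adapt them to your own disks.
Code:
# Small RAID 1 array for /boot (grub can read it like a plain partition)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext3 /dev/md0

# Larger RAID 5 array to hold the LVM volume group
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2

# Layer LVM on top of the array: one PV, one VG, one LV per distro
pvcreate /dev/md1
vgcreate vg0 /dev/md1
lvcreate -L 20G -n debian vg0
lvcreate -L 20G -n ubuntu vg0
lvcreate -L 50G -n data   vg0

# Record the arrays so the initramfs can assemble them at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
Each distro then gets installed into its own /dev/mapper/vg0-<name> volume.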
With this configuration, when you want to install a new OS, just create a new Logical Volume and install the OS to it. Do not re-install grub each time; instead, mount the boot partition and edit grub.cfg to point to the appropriate kernel and initrd, passing the real_root and mdadm / LVM parameters to the kernel (or whatever is appropriate for that distro). Of course, you have to make sure that each distro has the appropriate RAID/LVM drivers built into its initrd. (In Gentoo, e.g., you tell genkernel to build the initramfs with RAID/LVM support, and then you pass a /dev/mapper/<lv-name> argument as a kernel parameter, which the initramfs uses.)
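For example, a grub.cfg menuentry for a distro on a logical volume might look roughly like this. The LV name vg0-ubuntu, the kernel/initrd file names, and the kernel versions are assumptions; check the actual file names on your boot partition and what your distro's initramfs expects as its root= parameter.
Code:
menuentry "Ubuntu (on LVM over RAID)" {
    insmod mdraid1x
    insmod lvm
    # /boot is the shared RAID 1 array; kernels live under its root
    search --set=root --label boot
    linux  /ubuntu/vmlinuz root=/dev/mapper/vg0-ubuntu ro
    initrd /ubuntu/initrd.img
}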
Frankly, though, if this is just for testing: why bother with RAID at all? Or maybe just put your /data on RAID, and leave the rest as simple partitions or plain LVM?
[Extra note: Also be aware that your boot partition and bootloader don't actually have to be on a hard disk. They can be on USB or CD-ROM.]