Linux - General: This Linux forum is for general Linux questions and discussion. If it is Linux related and doesn't seem to fit in any other forum, then this is the place.
I must have read 150 HOWTOs, FAQs, docs and postings, but I've really made no progress. Please help me boot my system directly into RAID.
The system is a Debian box running kernel 2.6.4. I'm trying to convert the two IDE hard disks to a mirrored RAID1 array, attached to the onboard Promise FastTrak controller. Despite the Promise controller, this is effectively a software RAID solution, as the kernel just sees two IDE devices. I want to mirror all the partitions, although I'd be happy to let /boot stay separate if necessary.
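Before going further, it's worth confirming that the running kernel really does have md and raid1 built in rather than as modules. A quick check, assuming a standard /proc and a Debian-style config file in /boot (both assumptions, adjust paths for your system):

```shell
# Check that the md driver registered and which RAID personalities it supports
cat /proc/mdstat
# Expect a "Personalities : [raid1]" line if raid1 is available

# If the kernel config is installed, confirm md/raid1 are built in (=y, not =m)
grep -E 'CONFIG_(BLK_DEV_MD|MD_RAID1)=' /boot/config-$(uname -r)
```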
On boot, I get the following error:
md: Autodetecting RAID arrays.
md: autorun ...
md: ... autorun DONE.
...
...
VFS: Cannot open root device "905" or md5
Please append a correct "root=" boot option
Kernel panic: VFS: unable to mount root fs on md5
So no RAID arrays are detected. The partitions have hex type 'fd' (Linux RAID autodetect).
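For reference, listing the partition types to confirm this looks like the following (hde/hdg are this system's disks on the Promise controller; the partition number is an example):

```shell
# The Id column should show "fd" (Linux raid autodetect) for every array member
fdisk -l /dev/hde /dev/hdg

# Or query a single partition's type with the old sfdisk
sfdisk --print-id /dev/hde 5
```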
Each partition (boot, root, usr and var) is mirrored exactly on the other disk. This all works: under Knoppix I can use mdadm to set up the arrays, mount them, and everything behaves as expected. My kernel has everything appropriate compiled in (md, raid1, ext2, ext3 and so on), and I boot it directly, not via an initrd.
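For reference, the Knoppix-side assembly described above is roughly the following (the md and partition numbers follow this poster's numbering scheme and are examples):

```shell
# Assemble the root-filesystem mirror from its two halves
mdadm --assemble /dev/md5 /dev/hde5 /dev/hdg5

# Check that it came up and that both members are active
cat /proc/mdstat
mdadm --detail /dev/md5

# Mount it to inspect the contents
mount /dev/md5 /mnt
```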
An excerpt from my lilo.conf is at the moment (although I've made many changes):
lba32
boot=/dev/md2
raid-extra-boot=/dev/hde,/dev/hdg
root=/dev/md5
lilo runs successfully, but evidently the RAID device isn't available at boot time on device major number 9. I didn't assign md0 or md1: is that the problem? I matched the mdX numbers to the original partition numbers.
Please can anyone point me in the right direction?
I had a hard time figuring out why I couldn't specify an md device in grub as a root device. It turned out that for some reason my kernel does not automatically recognize the md partitions as possible candidates for a raid array rebuild.
I found the issue was that the partition type must be hex "fd" (Linux RAID autodetect).
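Setting that type non-interactively with the old sfdisk looks like this (the disk names and partition number are examples):

```shell
# Mark partition 5 on each disk as type fd (Linux raid autodetect)
sfdisk --change-id /dev/hda 5 fd
sfdisk --change-id /dev/hdb 5 fd

# Confirm the change took; expect "fd"
sfdisk --print-id /dev/hda 5
```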
Also worth remembering: you must format the arrayed partition AFTER assembling it. You can't format partitions as ext2/3 first and then RAID-1 them together.
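In other words the order is: create the array first, then put the filesystem on the md device. A minimal sketch with mdadm (device names are examples; "missing" deliberately leaves room for the second half to be added later):

```shell
# 1) Build the mirror, here degraded with one half "missing"
mdadm --create /dev/md5 --level=1 --raid-devices=2 missing /dev/hdb5

# 2) Format the array device itself, never the underlying partitions
mkfs.ext3 /dev/md5
```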
Finally, it was fiddly to get lilo set up correctly mid-migration to RAID1. I found that the
raid-extra-boot=/dev/hda,/dev/hdb
line failed if all of the arrays were degraded and living on hdb only. That was simple to solve by adding a partition from hda into the array. However, it is wise to boot into the degraded arrays on your second drive first, so that if it all goes wrong you can fall back (via Knoppix) to hda with the original data. If you trust your backups, then you can do it all in one go:
1) Mirror the partition table onto the second drive (dd, plus sfdisk for extended partitions)
2) Set the partition types on the second drive to fd
3) Create each RAID array using "missing" in place of /dev/hdaX
4) Format the RAID partitions (mkfs.ext2 /dev/mdX)
5) cp -ax each of your original partitions to the analogous RAID partition. You don't have to use md0, md1, md2; I use mdX where X is the same number as the original partition(s)
6) init 1
7) Unmount var, usr, home and whatever else, then remount the RAID partitions in their place
8) (this is the brave bit) Add the hdaX partitions to the arrays. This is where your original data could be lost. Once added, they will sync (please wait). Root can't be done straight away, because you can't remount to a different device, so leave it degraded and do cp -ax /* /mnt/newrootmdX instead (-x keeps the copy within the original partition/filesystem)
9) Update /etc/fstab, replacing hdaX/hdbX with mdX as appropriate
10) Update lilo.conf, using mdX for the boot partition and mdY for the root
11) Run lilo
12) I had some issue which meant I had to use lilo -A /dev/hd(a|b) and lilo -M /dev/hd(a|b), but it all worked in the end
13) Change the partition type of the hda RAID partitions to fd
14) You should now be able to reboot, provided the kernel has RAID compiled in (and the device-manager stuff needed to run lilo, at least on a 2.6 kernel)
15) Finally, add hdaX to the degraded root array
16) You should be in full working order now. Try bonnie++ to benchmark the array...
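Put together, the steps above look roughly like this sketch (device names, partition numbers and mount points are examples only, and nearly every command here is destructive, so map each one onto your own layout before running anything):

```shell
# Clone the partition table to the second disk, then mark its members fd
sfdisk -d /dev/hda | sfdisk /dev/hdb
sfdisk --change-id /dev/hdb 5 fd

# Create a degraded mirror ("missing" stands in for hda), then format it
mdadm --create /dev/md5 --level=1 --raid-devices=2 missing /dev/hdb5
mkfs.ext2 /dev/md5

# Copy the original filesystem across (-x stays within one filesystem)
mkdir -p /mnt/md5
mount /dev/md5 /mnt/md5
cp -ax /var/. /mnt/md5/         # assuming /dev/hda5 currently holds /var

# Drop to single-user, swap the mounts, then pull hda into the array
init 1
umount /var && mount /dev/md5 /var
mdadm --add /dev/md5 /dev/hda5  # the original data on hda5 is overwritten here

# After editing /etc/fstab and lilo.conf, reinstall the boot loader
lilo

# Mark the hda members fd too, so autodetect finds them on the next boot
sfdisk --change-id /dev/hda 5 fd
```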