I set up a RAID-1 array a month or so ago, on brand new 300 GB drives. One of them failed, probably infant mortality. Unfortunately, my RAID didn't handle it too well, and now I'm trying to get it all going again.
I had the disks set up in 4 partitions: /boot, /, /var, and swap, so I had /dev/md0 through /dev/md3. It was working great.
After the failure of the one disk, only /dev/md2 comes up. /dev/md0 and the others complain that there is "no device" (or something like that). Each array was built from the matching partitions on both drives (e.g., /dev/md0 from /dev/hda1 and /dev/hdc1), and the partitions were set up as type fd (Linux raid autodetect).
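For reference, a minimal sketch of commands for checking what the kernel and the on-disk superblocks think is going on (this assumes the arrays were built with mdadm rather than the older raidtools, and uses the device names from my setup):

    # Show which arrays the kernel currently knows about
    cat /proc/mdstat

    # Inspect the RAID superblock on a member partition
    mdadm --examine /dev/hda1

    # Show the state of the one array that did come up
    mdadm --detail /dev/md2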
I was able to get it all working after removing the bad disk by changing /etc/fstab to mount the partitions as non-RAID devices.
What I'm looking for is some hints. When I try to start RAID on one of the partitions it works, but on the other 3 partitions RAID doesn't "see" the partitions. I know the partitions are basically OK, because I'm using them - that is, the data is all there.
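One approach is to assemble the missing arrays by hand in degraded mode. A sketch, assuming mdadm and my member names (adjust for your layout); note the --create step rewrites the RAID superblocks, so back everything up first:

    # Try to assemble an array from its surviving member;
    # --run starts it even though it is missing a disk
    mdadm --assemble --run /dev/md0 /dev/hda1

    # If the superblock is damaged, the array can be recreated
    # degraded, with "missing" standing in for the dead disk
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 missing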
I suspect that when I add a new good disk, it's only going to work for one of the partitions.
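When the replacement disk does go in, the usual sequence (a sketch, assuming the new drive shows up as /dev/hdc like the old one, and that the arrays are running again) looks like:

    # Copy the partition table from the good disk to the new one
    sfdisk -d /dev/hda | sfdisk /dev/hdc

    # Hot-add each new partition to its array; the kernel resyncs
    mdadm --add /dev/md0 /dev/hdc1
    mdadm --add /dev/md1 /dev/hdc2

    # Watch the resync progress
    cat /proc/mdstat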
We ran into issues on one of our old RH 7.3 boxes and finally opted to just reinstall it with RHEL 4, since that was on the wish list anyway. After doing the install I found issues with the RAID setup again, which led me to the following link. Essentially it tells you how to set up grub so that it understands your RAID setup after the fact.
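The gist of that procedure, as best I can sketch it from memory (this assumes grub legacy, /boot as the first partition on each drive, and IDE disks /dev/hda and /dev/hdc), is to install grub's boot code on both disks, temporarily mapping each one as (hd0) so either drive can boot on its own:

    # Feed the grub shell a batch of commands to install the
    # boot loader on both halves of the mirror
    grub --batch <<EOF
    device (hd0) /dev/hda
    root (hd0,0)
    setup (hd0)
    device (hd0) /dev/hdc
    root (hd0,0)
    setup (hd0)
    quit
    EOF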