another RAID repair question
I set up a RAID-1 array a month or so ago on brand-new 300G drives. One of them failed - probably infant mortality. Unfortunately, my RAID didn't handle it too well, and now I'm trying to get it all going again.
I had the disks set up in 4 partitions: /boot, /, /var, and swap. I had /dev/md0 thru /dev/md3. Was working great.
After the one disk failed, only /dev/md2 comes up. /dev/md0 and the others complain that there is "no device" (or something like that). The arrays were built from matching partitions on the two disks - /dev/md0 from /dev/hda1 and /dev/hdc1, and so on. The partition types were all set to fd (Linux raid autodetect).
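In case it helps, here's roughly what I've been doing to check the state of things (this assumes mdadm is available; hdc was my failed disk, so hda is the survivor):

```shell
# Look at the md superblock on each surviving member partition -
# this should report the array UUID, level, and member state
mdadm --examine /dev/hda1
mdadm --examine /dev/hda2
mdadm --examine /dev/hda3

# See which arrays the kernel currently knows about
cat /proc/mdstat
```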
After removing the bad disk, I was able to get everything working by changing /etc/fstab to mount the partitions directly as non-RAID devices.
What I'm looking for is some insight into why starting RAID works on one of the partitions, but on the other three RAID doesn't "see" the partitions at all. I know the partitions themselves are basically OK, because I'm using them right now - all the data is there.
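For reference, this is the sort of thing I'd expect to work for starting an array degraded with only its surviving member (again assuming mdadm, and hda as the good disk - device names are from my setup):

```shell
# Assemble /dev/md0 from just the surviving partition;
# --run starts the array even though one mirror is missing
mdadm --assemble --run /dev/md0 /dev/hda1
```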
I suspect that when I add a new good disk, it's only going to work for one of the partitions.
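When the replacement drive arrives, my plan was something like the following (hedged - the device names assume the new disk shows up as hdc, and that the arrays can be started first):

```shell
# Copy the partition table from the good disk to the replacement
sfdisk -d /dev/hda | sfdisk /dev/hdc

# Hot-add each new partition to its array;
# the kernel then resyncs the mirror in the background
mdadm /dev/md0 --add /dev/hdc1
mdadm /dev/md1 --add /dev/hdc2
mdadm /dev/md2 --add /dev/hdc3

# Watch the rebuild progress
cat /proc/mdstat
```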
Any hints? - Thanks for any thoughts.