Raid 5 array messing around
I have a server running a RAID 5 array: 2 SATA and 2 IDE drives. For some reason one of the IDE drives seems to have stopped working. That is OK, but the problem is that the second drive on the IDE interface now seems to have excluded itself from the array as well. When I do a mdadm --detail /dev/md0 I get
Code:
/dev/md0: [...]
/dev/hdc3: [...]
viperuk
Probably something like
Code:
mdadm --fail /dev/md0 /dev/hdc3
mdadm --remove /dev/md0 /dev/hdc3
mdadm --add /dev/md0 /dev/hdc3
If the array has been stopped already then you might have to force it to assemble with:
Code:
mdadm --assemble --force /dev/md0 /dev/hdc3 /dev/sda3 /dev/sdb3
(Note that I left out the other IDE drive; not sure if that is what you want or not...) Regardless of how many drives you list here, forcing assembly will only start the array with 3 of the 4 drives, so you will still have to add the other drive before it begins to resync. Either way, after you are done adding the drive or assembling, you will have to fsck the filesystem to fix any errors induced by the broken array. With /dev/md0 NOT mounted:
Code:
e2fsck -f -D /dev/md0
Make sure to only run that after you have the array working right!
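Once the replacement drive has been added back, the resync can be followed from userspace. A minimal sketch of how you might watch it (the device names /dev/md0 and /dev/hdc3 are taken from this thread; exact output format varies by mdadm version):

```shell
# Show overall array state; during a rebuild you should see something
# like [4/3] [UUU_] plus a "recovery = xx.x%" progress line.
cat /proc/mdstat

# More detail on the array, including the rebuild status line.
mdadm --detail /dev/md0 | grep -E 'State|Rebuild|Failed'

# Or follow the rebuild live, refreshing every 5 seconds.
watch -n 5 cat /proc/mdstat
```

Nothing here writes to the array, so it is safe to run at any point while the resync is in progress.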
I got it resolved. I had to include all 4 drives in the --assemble --force, as every time I attempted it with only 3 it said there weren't enough drives to start the array. When I assembled with all 4 drives listed, it only used the 3 good drives; I then replaced the faulty drive and was able to resync the whole array.
Although, before I got as far as completing the rebuild, I took a copy of any data that was still readable from the 3 drives to ensure I had a current backup, just in case things went downhill again. I didn't like the --force option, which is why I turned here for an answer, but now I have a bit more confidence and a better understanding of mdadm. Thank you for your help. viperuk
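Copying the data off the degraded array before letting the rebuild run, as described above, could be sketched like this (the mount points /mnt/md0 and /backup are hypothetical, not from the thread; mounting read-only avoids further writes to a degraded array):

```shell
# Mount the degraded array read-only so nothing new is written to it.
mkdir -p /mnt/md0
mount -o ro /dev/md0 /mnt/md0

# Copy everything readable to a separate disk, preserving permissions,
# hard links, ACLs, and extended attributes.
rsync -aHAX --progress /mnt/md0/ /backup/md0-copy/

# Unmount before starting the rebuild / fsck.
umount /mnt/md0
```

With RAID 5 already running on 3 of 4 drives there is no redundancy left, so taking this copy before the resync stresses the remaining disks is a sensible precaution.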