help with mdadm disk failure
I previously created a RAID 5 array from four disks (sd[e-h]1) using mdadm.
After using it for a while, one of the disks failed (mdadm reported the failure). The disk also disappeared from the output of "fdisk -l".
My OS is Red Hat EL4, kernel 2.6.9-22.
I restarted the machine and ran fdisk again; no surprise, the disk is back.
I am sure that the drive is good, but somehow mdadm does not like the drive. How do I fix this problem?
[root@ss1 /]# mdadm --assemble /dev/md1 /dev/sd[e-h]1
The message "not enough to start the array" presumably means the drive that failed had unique data on it. You will need to rebuild the array and restore from backup.
You can 'add' a device to an array to replace a faulty one, but if that drive held unique data (as with plain striping, i.e. RAID 0), then this operation is really impossible; you need to wipe and rebuild.
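For reference, hot-adding a replacement to a degraded but still-running array looks roughly like this. This is just a sketch; /dev/md1 and /dev/sde1 are example names taken from the original post, so substitute your own devices:

```shell
# Mark the dead member as failed and remove it from the array.
mdadm /dev/md1 --fail /dev/sde1
mdadm /dev/md1 --remove /dev/sde1

# Add the replacement disk; md will start rebuilding onto it.
mdadm /dev/md1 --add /dev/sde1

# Watch the resync progress.
cat /proc/mdstat
```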
I would interpret that message differently.
A RAID 5 array needs a minimum of three active disks.
An md1 built with two drives and a spare does not meet this minimum requirement; you would need three drives plus a spare, or four drives, to have a valid RAID 5.
I would try to investigate why mdadm is only seeing three of the four drives.
You should be fine any way you slice it if you only lost one drive in RAID 5; after all, that is the point of RAID 5. The parity data written to the other drives is enough to rebuild a single drive's worth of data.
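A few things worth checking while investigating the missing drive (device names below are examples based on the original sd[e-h]1 layout; adjust to your setup):

```shell
# Does the kernel see the disk at all after the failure?
dmesg | grep -i sde

# Is the partition still visible to the block layer?
grep sde /proc/partitions

# What does the md superblock on each member report
# (event counts, array UUID, device roles)?
mdadm --examine /dev/sd[e-h]1

# Current array state as the kernel sees it.
cat /proc/mdstat
mdadm --detail /dev/md1
```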
Anyway just putting in my 2 cents.
Please post back with any developments :)
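If the superblocks on the three surviving members look sane, you may be able to start the array degraded and then re-add the fourth disk. This is only a sketch under that assumption (device names are examples); check the `mdadm --examine` output before forcing anything:

```shell
# Try to assemble the array degraded from the three good members.
mdadm --assemble --force /dev/md1 /dev/sdf1 /dev/sdg1 /dev/sdh1

# If it starts, re-add the missing member and let md rebuild it.
mdadm /dev/md1 --add /dev/sde1
cat /proc/mdstat
```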
I am not sure either why mdadm sees only 3 drives. I checked each drive by reformatting it one at a time, then mounted each one successfully.
I re-created the RAID 5 and it works now.
The problem has happened many times, and each time I have had to re-create the RAID 5 with mdadm. I am not sure if the problem is the disk itself.