My OS is Red Hat EL4, kernel 2.6.9-22.
I previously created a RAID 5 array from 4 disks using mdadm (sd[e-h]1).
After using it for a while, one of the disks failed (mdadm reported the failure), and the disk also disappeared from "fdisk -l".
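For reference, I created the array with something like this (from memory, so the exact options may have been slightly different):
Code:
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1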
I restarted the machine and ran fdisk again; no surprise, the disk is back.
$fdisk -l
Code:
Disk /dev/sde: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1       60801   488384001   fd  Linux raid autodetect

Disk /dev/sdf: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1       60801   488384001   fd  Linux raid autodetect

Disk /dev/sdg: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1               1       60801   488384001   fd  Linux raid autodetect

Disk /dev/sdh: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdh1               1       60801   488384001   fd  Linux raid autodetect
I then tried to restart the RAID, but it failed.
I am sure the drive is good, but somehow mdadm does not like it. How do I fix this problem?
[root@ss1 /]# mdadm --assemble /dev/md1 /dev/sd[e-h]1
Code:
mdadm: /dev/md1 assembled from 2 drives and 1 spare - not enough to start the array.
[root@ss1 /]# cat /proc/mdstat
Code:
Personalities :
md1 : inactive sde1[0] sdf1[4] sdh1[3] sdg1[2]
      1953535744 blocks
unused devices: <none>
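I assume the next step is to check the RAID superblocks on each member, so I plan to run something like this (I can post the output if it helps):
Code:
mdadm --examine /dev/sd[e-h]1
From what I have read, a forced assemble might bring the array back up, but I am not sure whether it is safe to try in this situation:
Code:
mdadm --assemble --force /dev/md1 /dev/sd[e-h]1
Is that the right approach here?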