A larger disk is simply more likely to run into a bad sector, basically. The fact that md0 is still in one piece seems to indicate your disk as such is still working (spinning, delivering data), but you might want to try reading from and writing to md0 a bit too, if it is mounted.
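If you want to give md0 a quick read exercise and ask the disk itself how it feels, something like this should do (the 1 GiB read size is arbitrary, and smartctl comes from the smartmontools package):
Code:
# dd if=/dev/md0 of=/dev/null bs=1M count=1024
# smartctl -a /dev/sda
In the SMART output, keep an eye on attributes like Reallocated_Sector_Ct and Current_Pending_Sector; rising counts there mean the disk really is on its way out.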
It even happens that the RAID is considered broken because of a harmless disk timeout. An example from my RAID set:
Code:
# cat /proc/mdstat
Personalities : [raid1]
..
md0 : active raid1 sda1[0] sdb1[1]
96256 blocks [2/2] [UU]
unused devices: <none>
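For the record, [2/2] [UU] means both mirror halves are active; a _ in place of a U marks a missing or failed member. If you want more detail than /proc/mdstat gives, mdadm can print the full array state, including each member device:
Code:
# mdadm --detail /dev/md0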
Now I force sdb1 into the faulty state:
Code:
# mdadm --manage --fail /dev/md0 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0
# cat /proc/mdstat
Personalities : [raid1]
..
md0 : active raid1 sda1[0] sdb1[2](F)
96256 blocks [2/1] [U_]
unused devices: <none>
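When a member gets kicked like this, the kernel log usually tells you why (bus reset, timeout, read errors), so it is worth a look before declaring the disk dead:
Code:
# dmesg | grep -i sdb
Depending on your distro the older messages may be in /var/log/messages or /var/log/syslog instead.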
This would be a typical situation after a hardware failure, or sometimes a more harmless problem such as a timeout. In your case the disk isn't marked faulty; it looks more like the state after
Code:
# mdadm --manage --remove /dev/md0 /dev/sdb1
mdadm: hot removed /dev/sdb1
# cat /proc/mdstat
Personalities : [raid1]
..
md0 : active raid1 sda1[0]
96256 blocks [2/1] [U_]
unused devices: <none>
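Before re-adding, you can also check what the removed member's RAID superblock says about itself; if this still prints sane metadata (matching UUID, event count), a re-add has a decent chance of working:
Code:
# mdadm --examine /dev/sdb1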
So what you could try is:
Code:
# mdadm --manage --add /dev/md0 /dev/sdb1
mdadm: re-added /dev/sdb1
# cat /proc/mdstat
Personalities : [raid1]
..
md0 : active raid1 sdb1[2] sda1[0]
96256 blocks [2/1] [U_]
[===========>.........] recovery = 58.9% (57920/96256) finish=0.0min speed=28960K/sec
unused devices: <none>
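You can follow the rebuild in /proc/mdstat, and the kernel's resync throttles live under /proc/sys/dev/raid. Raising the minimum speeds up the resync at the cost of normal I/O; the 50000 below is just a suggestion:
Code:
# watch -n 5 cat /proc/mdstat
# cat /proc/sys/dev/raid/speed_limit_min
# echo 50000 > /proc/sys/dev/raid/speed_limit_min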
It should do a resync, which can take quite a while if you have a large partition, and even longer when the machine is also busy. If this doesn't work, or if your RAID set fails again within a couple of days, you really do need to replace the disk. A kind soul wrote up a howto for that at
http://www.howtoforge.com/replacing_..._a_raid1_array
-Bert