Hi,
I've got a simple software RAID1 configuration (2x250GB disks) running on CentOS 5 where one of the devices appears to have failed.
Whilst the system is continuing to run, the ext3 filesystem on the RAID device has been placed into read-only mode. Is this normal behaviour? As one of the devices is still operational, I can't see why this should have happened. Can anyone explain what may have caused the OS to do this?
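In case it's relevant, these are the sort of checks I was planning to run to see how the filesystem is configured to react to errors and what the kernel logged when the device failed (just a rough sketch, device names and log paths may need adjusting):
Code:
# How ext3 is set to behave on errors (continue, remount-ro or panic)
tune2fs -l /dev/md0 | grep -i 'errors behavior'

# Current mount options for the array (look for ro and errors=)
grep md0 /proc/mounts /etc/fstab

# Kernel messages from around the time of the failure
dmesg | grep -i -e ext3 -e error -e md0
grep -i -e ext3 -e raid1 /var/log/messages | tail -50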
Output from /proc/mdstat is:-
Code:
Personalities : [raid1]
md1 : active raid1 sdb2[2](F) sda2[0]
      16386176 blocks [2/1] [U_]

md0 : active raid1 sdb1[1] sda1[2](F)
      227753408 blocks [2/1] [_U]
Output from mdadm --detail /dev/md0 is:-
Code:
/dev/md0:
        Version : 0.90
  Creation Time : Tue Apr 20 18:20:16 2010
     Raid Level : raid1
     Array Size : 227753408 (217.20 GiB 233.22 GB)
  Used Dev Size : 227753408 (217.20 GiB 233.22 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sun Oct 24 04:22:01 2010
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 1
  Spare Devices : 0

           UUID : a6b83643:464171e0:476dc2f4:af1f75d0
         Events : 0.3904734

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       17        1      active sync   /dev/sdb1

       2       8        1        -      faulty spare   /dev/sda1
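Once the cause is clear, I'm assuming the recovery would look roughly like this (sketch only — assuming /dev/sda turns out to be the disk that actually failed and /dev/sda1 is the member to replace in md0; happy to be corrected):
Code:
# Check the suspect disk's SMART health first
smartctl -a /dev/sda

# Remove the faulty member from md0, then re-add it (or the replacement partition)
mdadm /dev/md0 --remove /dev/sda1
mdadm /dev/md0 --add /dev/sda1

# Watch the resync progress
watch cat /proc/mdstat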
Thanks