Hi everyone, I have a RAID 5 array set up using Linux software RAID (mdadm). The array consists of 6 x 1TB drives with no spares. The array has just gone down with a failed disk, which I can replace, but it is also reporting one of the disks as a spare, which worries me. I am not very familiar with Linux RAID and would appreciate some help so I don't destroy all the data on it by making a wrong move.
I've googled some commands to try and troubleshoot the problem; the output is listed below:
Code:
[root@san1 ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Fri Aug 6 10:01:47 2010
Raid Level : raid5
Array Size : 4883642880 (4657.40 GiB 5000.85 GB)
Used Dev Size : 976728576 (931.48 GiB 1000.17 GB)
Raid Devices : 6
Total Devices : 6
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Fri Nov 11 21:18:46 2011
State : clean, degraded
Active Devices : 4
Working Devices : 5
Failed Devices : 1
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
UUID : 4589a5f7:25dce26a:d0119310:03eb1348
Events : 0.2326158
Number   Major   Minor   RaidDevice   State
   0       8       1        0         active sync    /dev/sda1
   1       0       0        1         removed
   2       0       0        2         removed
   3       8      49        3         active sync    /dev/sdd1
   4       8      81        4         active sync    /dev/sdf1
   5       8      65        5         active sync    /dev/sde1
   6       8      33        -         faulty spare   /dev/sdc1
   7       8      17        -         spare          /dev/sdb1
When I try to restart the array, the faulty disk (sdc1) comes back online, but one of the other disks (sdb1) is listed as a spare and tries to resync.
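For reference, I restarted it with roughly the following commands (typed from memory, so the exact flags and device order may differ slightly from what I actually ran):

```shell
# Stop the array cleanly first
mdadm --stop /dev/md0

# Re-assemble from the existing superblocks; --force tells mdadm
# to bring in members it would otherwise refuse (e.g. the one
# previously marked faulty)
mdadm --assemble --force /dev/md0 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
```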
Code:
[root@san1 ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdf1[4] sde1[5] sdd1[3] sdc1[2] sdb1[6] sda1[0]
4883642880 blocks level 5, 64k chunk, algorithm 2 [6/5] [U_UUUU]
[>....................] recovery = 4.2% (41443840/976728576) finish=258.2min speed=60352K/sec
[root@san1 ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Fri Aug 6 10:01:47 2010
Raid Level : raid5
Array Size : 4883642880 (4657.40 GiB 5000.85 GB)
Used Dev Size : 976728576 (931.48 GiB 1000.17 GB)
Raid Devices : 6
Total Devices : 6
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Fri Nov 11 18:09:33 2011
State : clean, degraded, recovering
Active Devices : 5
Working Devices : 6
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
Rebuild Status : 5% complete
UUID : 4589a5f7:25dce26a:d0119310:03eb1348
Events : 0.2326144
Number   Major   Minor   RaidDevice   State
   0       8       1        0         active sync        /dev/sda1
   6       8      17        1         spare rebuilding   /dev/sdb1
   2       8      33        2         active sync        /dev/sdc1
   3       8      49        3         active sync        /dev/sdd1
   4       8      81        4         active sync        /dev/sdf1
   5       8      65        5         active sync        /dev/sde1
The array goes offline as soon as sdb1 completes resyncing. Can anyone please advise me on the best course of action here? Can I replace sdc1 with a working drive, or do I have to deal with sdb1 first?
Thanks in advance,
Smallgreen