I have three md RAID5 arrays that partially share the same drives (due to mismatched disk sizes).
I recently swapped a couple of cables for longer ones. I may have switched some of the drive connectors around in the process, but I have sorted that out now.
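If anyone wants to double-check me on that: each partition's md superblock records which array it belongs to and which slot it occupies, so something along these lines should confirm the assignments (these are just my md2 member partitions):
Code:
# show the array UUID and this device's slot/role for each md2 member
for p in /dev/sdc1 /dev/sdb3 /dev/sdg1 /dev/sdf1 /dev/sdd3; do
    echo "== $p =="
    mdadm --examine "$p" | grep -E 'UUID|this'
done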
I have this setup (from /proc/mdstat):
Code:
md2 : inactive sdg1[2] sdf1[3] sdb3[1] sdd3[5](S) sdc1[0]
73255808 blocks
md1 : active raid5 sde1[0] sdd1[4] sdf3[3] sdg3[2] sdb1[1]
781433344 blocks level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
md0 : active raid5 sdc3[0] sdd2[4] sdf2[3] sdg2[2] sdb2[1]
97642752 blocks level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
As you can see, md0 and md1 are fine. md2, on the other hand, refuses to cooperate. Apparently the array is dirty, but how can I clean it? It is also degraded, so I cannot mount it, move the data off, and then wipe and recreate the array from scratch. sdd3, which is marked as a spare, is supposed to be the fifth device in the array.
Some relevant data:
Code:
mdadm -D /dev/md2
mdadm: metadata format 00.90 unknown, ignored.
mdadm: metadata format 00.90 unknown, ignored.
mdadm: metadata format 00.90 unknown, ignored.
/dev/md2:
Version : 00.90
Creation Time : Thu May 1 22:41:34 2008
Raid Level : raid5
Used Dev Size : 14651136 (13.97 GiB 15.00 GB)
Raid Devices : 5
Total Devices : 5
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Mon Jan 5 15:39:03 2009
State : active, degraded, Not Started
Active Devices : 4
Working Devices : 5
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 64K
UUID : 8c00dfaf:a414eba5:fa99d161:76122a73
Events : 0.853386
Number Major Minor RaidDevice State
0 8 33 0 active sync /dev/sdc1
1 8 19 1 active sync /dev/sdb3
2 8 97 2 active sync /dev/sdg1
3 8 81 3 active sync /dev/sdf1
4 0 0 4 removed
5 8 51 - spare /dev/sdd3
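Before doing anything drastic, I am also going to compare the event counters in each member's superblock; if sdd3 has simply fallen behind, that would explain why it is listed as a spare. Roughly (an untested one-liner, same partitions as above):
Code:
mdadm --examine /dev/sdc1 /dev/sdb3 /dev/sdg1 /dev/sdf1 /dev/sdd3 | grep -E '^/dev|Events'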
Trying to start the array:
Code:
mdadm -R /dev/md2
mdadm: metadata format 00.90 unknown, ignored.
mdadm: metadata format 00.90 unknown, ignored.
mdadm: metadata format 00.90 unknown, ignored.
mdadm: failed to run array /dev/md2: Input/output error
dmesg output:
Code:
[60658.635891] raid5: device sdg1 operational as raid disk 2
[60658.635901] raid5: device sdf1 operational as raid disk 3
[60658.635906] raid5: device sdb3 operational as raid disk 1
[60658.635911] raid5: device sdc1 operational as raid disk 0
[60658.635916] raid5: cannot start dirty degraded array for md2
[60658.635921] RAID5 conf printout:
[60658.635924] --- rd:5 wd:4
[60658.635928] disk 0, o:1, dev:sdc1
[60658.635931] disk 1, o:1, dev:sdb3
[60658.635935] disk 2, o:1, dev:sdg1
[60658.635938] disk 3, o:1, dev:sdf1
[60658.635942] raid5: failed to run raid set md2
[60658.635945] md: pers->run() failed ...
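From what I have read, the "cannot start dirty degraded array" message comes from an explicit check in the md driver, and there is a module parameter to override it. I have not tried it yet, and I am not sure it is a good idea in my situation:
Code:
# as root; only applies if md is loaded as the md_mod module
cat /sys/module/md_mod/parameters/start_dirty_degraded
echo 1 > /sys/module/md_mod/parameters/start_dirty_degraded
# if md is built into the kernel, the equivalent is booting with md-mod.start_dirty_degraded=1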
My problem is probably not that something is wrong with sdd itself, as the other two arrays run fine, each using a partition on that drive.
To me it looks like a "chicken-or-the-egg" type problem: I can't start the array until it is clean, yet it seems it has to be started before the cleaning (resync) can begin.
I was able to mount the array at one point, but unmounted it to add the missing member (sdd3), which was impossible while the array was mounted. After I tried that, I have been unable to do anything useful with it.
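For reference, this is what I am considering trying next, based on the mdadm man page (--force tells mdadm to assemble the array even if some of the superblocks look out of date). I would appreciate a sanity check before I run it:
Code:
mdadm --stop /dev/md2
mdadm --assemble --force /dev/md2 /dev/sdc1 /dev/sdb3 /dev/sdg1 /dev/sdf1 /dev/sdd3
cat /proc/mdstat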