I powered down my file server this morning and started it back up a little while ago. One of the drives in a RAID 6 array hangs the system when it's inserted, so after a little tomfoolery I identified the trouble drive. After removing it I finally have the system booting, but the RAID array comes up as inactive with all members listed as spares.
I've run "mdadm --examine" on all the remaining drives in the RAID 6 array and the "Events" counts are all the same: 2085124.
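For reference, the check was roughly this loop (a sketch; the drive letters match my system, adjust for yours):

```shell
# Print the State and Events lines from each remaining member
# of the array so the event counts can be compared side by side.
for d in /dev/sd{d,e,f,g,h,i,j}1; do
    echo "== $d =="
    mdadm --examine "$d" | grep -E 'State|Events'
done
```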
When I try to assemble the array I get:
Code:
mdadm --assemble /dev/md1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdg1
mdadm: /dev/md1 assembled from 6 drives and 1 spare - not enough to start the array while not clean - consider --force.
Can the array be safely started using the --force option?
Any assistance would be greatly appreciated.
Below are the details of one of the drives.
Code:
/dev/sdd1:
Magic : a92b4efc
Version : 00.90.00
UUID : 08ad285e:ba1e0d5a:38cad0b1:c61c5ee7
Creation Time : Wed May 20 14:29:17 2009
Raid Level : raid6
Used Dev Size : 976759936 (931.51 GiB 1000.20 GB)
Array Size : 4883799680 (4657.55 GiB 5001.01 GB)
Raid Devices : 7
Total Devices : 8
Preferred Minor : 1
Update Time : Tue Feb 16 15:14:50 2010
State : active
Active Devices : 7
Working Devices : 8
Failed Devices : 0
Spare Devices : 1
Checksum : 3ae60447 - correct
Events : 2085124
Chunk Size : 64K
Number Major Minor RaidDevice State
this 3 8 49 3 active sync /dev/sdd1
0 0 8 145 0 active sync /dev/sdj1
1 1 8 129 1 active sync /dev/sdi1
2 2 8 65 2 active sync /dev/sde1
3 3 8 49 3 active sync /dev/sdd1
4 4 8 113 4 active sync /dev/sdh1
5 5 8 161 5 active sync
6 6 8 97 6 active sync /dev/sdg1
7 7 8 81 7 spare /dev/sdf1