Okay, I have a copy of OpenMediaVault (Debian-based) that's been running without issue for over a year. It has a decent processor, 32 GB RAM, 4 x 4 TB HDDs, 1 x SSD and a separate boot hard drive.
Recently, due to a broken fan, one of the hard disks shut down and dropped out of the RAID (RAID 10, in case it makes a difference). I've replaced the fan and got everything up and running again. However, when I add the hard drive back into the RAID, it starts the recovery, then at 21% it stops trying to add the drive and just marks it as removed. "mdadm -D /dev/md0" gives the following (the command I used to add the drive back is shown after the output):
Code:
root@fileserver:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Fri Jun 14 20:06:24 2013
Raid Level : raid10
Array Size : 7814034432 (7452.04 GiB 8001.57 GB)
Used Dev Size : 3907017216 (3726.02 GiB 4000.79 GB)
Raid Devices : 4
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Tue Sep 16 10:56:15 2014
State : clean, degraded
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Name : fileserver:0 (local to host fileserver)
UUID : 7e556cd4:f56c995e:68f72813:eeb2a61c
Events : 5024563
    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       32        1      active sync   /dev/sdc
       2       0        0        2      removed
       4       8       48        3      active sync   /dev/sdd
Note that the device is marked as removed, not failed or spare.
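For reference, this is roughly how I added the disk back and watched the recovery (from memory; I'm assuming the replaced drive came back as /dev/sdb, since sda, sdc and sdd are the remaining members, so the device name may not be exact):
Code:
root@fileserver:~# mdadm /dev/md0 --add /dev/sdb
root@fileserver:~# cat /proc/mdstat
The rebuild shows up in /proc/mdstat and climbs to about 21% before the drive gets marked as removed again.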
Thinking the disk could have failed, I ran badblocks, which gave it a clean bill of health. I then ran fdisk to remove the partition information, so it should effectively be a clean disk, and tried again. I get exactly the same result.
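These are roughly the checks I ran against the replaced disk (again assuming /dev/sdb is the right device):
Code:
root@fileserver:~# badblocks -v /dev/sdb
root@fileserver:~# fdisk /dev/sdb
In fdisk I deleted the existing partition(s) with 'd' and wrote the empty table with 'w' before trying the add again.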
Anybody got any ideas on repairing the RAID to full strength?