Just rebooted my Debian server without first checking that all the drives were properly connected...
Anyway, I'm running a RAID5 array with mdraid, and the disconnected drive is now showing as "removed":
Code:
# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Feb 20 15:30:24 2017
     Raid Level : raid5
     Array Size : 15627548672 (14903.59 GiB 16002.61 GB)
  Used Dev Size : 3906887168 (3725.90 GiB 4000.65 GB)
   Raid Devices : 5
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Mon Mar  6 22:18:44 2017
          State : clean, degraded
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : ana-nas:0  (local to host ana-nas)
           UUID : 1c96f3e6:d996243e:91fa4af1:636f787f
         Events : 5672

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
       4       0        0        4      removed
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdb[0] sde[3] sdd[2] sdc[1]
      15627548672 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UUUU_]

unused devices: <none>
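For reference, the `[5/4] [UUUU_]` markers in /proc/mdstat already say one member is missing (one `_` per missing member). Something like this counts them — a throwaway sketch, using the mdstat line above as sample data rather than reading the live /proc/mdstat:

```shell
# Count missing members from the mdstat status marker, e.g. "[UUUU_]".
# Sample line copied from the output above; on a live box you'd pipe in
# /proc/mdstat instead.
line='15627548672 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UUUU_]'
missing=$(printf '%s\n' "$line" | grep -o '\[[U_]*\]' | tr -cd '_' | wc -c)
echo "missing members: $missing"
```

The `\[[U_]*\]` pattern only matches the up/down marker, not `[5/4]` or the `[raid5]` personality tags, so it's safe to run over the whole file.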
I reconnected the drive and re-added it:
Code:
# mdadm --add /dev/md0 /dev/sdf
mdadm: added /dev/sdf
But now the drive shows as device number 5 instead of 4. Is this the intended behaviour? Is there any way to get /dev/sdf to show as device 4 again to satisfy my OCD?
Code:
# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Feb 20 15:30:24 2017
     Raid Level : raid5
     Array Size : 15627548672 (14903.59 GiB 16002.61 GB)
  Used Dev Size : 3906887168 (3725.90 GiB 4000.65 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Mon Mar  6 22:29:03 2017
          State : clean, degraded, recovering
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 0% complete

           Name : ana-nas:0  (local to host ana-nas)
           UUID : 1c96f3e6:d996243e:91fa4af1:636f787f
         Events : 5972

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       3       8       64        3      active sync   /dev/sde
       5       8       80        4      spare rebuilding   /dev/sdf
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdf[5] sdb[0] sde[3] sdd[2] sdc[1]
      15627548672 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UUUU_]
      [>....................]  recovery =  0.0% (2420676/3906887168) finish=456.9min speed=142392K/sec
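For what it's worth, the bracketed number in mdstat (`sdf[5]`) looks like the internal device/descriptor number in the superblock, while the RaidDevice column (4 here) is the actual role in the array, so the data layout should be unaffected; mdadm seems to hand out the next unused device number rather than reusing 4. A quick sketch that splits the two apart from the member list (sample entries copied from the mdstat line above, pure shell parameter expansion):

```shell
# Split each mdstat member entry (e.g. "sdf[5]") into the device name and
# its internal device number. The bracketed number is cosmetic; the
# RaidDevice slot in `mdadm --detail` is what determines the layout.
set -- 'sdf[5]' 'sdb[0]' 'sde[3]' 'sdd[2]' 'sdc[1]'
for m in "$@"; do
    dev=${m%%\[*}               # strip from the "[" onwards -> device name
    num=${m#*\[}; num=${num%]}  # keep what's inside the brackets
    echo "$dev -> device number $num"
done
```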