I have had a drive fail in my RAID 1 array on my home server, and I want to get the system back up temporarily while I obtain a replacement.
Unfortunately, two of my RAID arrays have not come back. I have run "mdadm --examine --scan", which now shows something like this:
Quote:
...
ARRAY /dev/md/2 metadata=1.2 UUID=c82af915:d1a84066:bbc4cbf5:b347b05b name=Microknoppix:2
ARRAY /dev/md3 UUID=6dff6efd:a8f89543:1e12c7ee:6e1c64bf
ARRAY /dev/md4 UUID=6166d42f:4711b973:1e12c7ee:6e1c64bf
ARRAY /dev/md/5 metadata=1.2 UUID=8bb05937:c501ba5d:f30f52e7:be86dc9a name=slackware:5
The /dev/md2 and /dev/md5 arrays are not mounted and appear to have been renamed to /dev/md/2 and /dev/md/5.
The underlying partitions were/are simply /dev/sda2, /dev/sdb2 etc.
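For what it's worth, I assume I can still look at the superblocks on the individual members and check what the kernel has assembled so far with something like the following (sda2 is just an example from my layout):

Code:
# check the md superblock on one of the surviving member partitions
mdadm --examine /dev/sda2
# show which md arrays the kernel has actually assembled so far
cat /proc/mdstat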
Is there some way of forcing the /dev/md2 array to be assembled on boot, or do I need to create a new array using the updated "/dev/md/2" naming?
(Note: I'm OK with this running degraded in the short term.)
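In case it makes the question clearer, this is roughly what I was thinking of trying. It is completely untested, it assumes sdb is the dead disk, that md5 sits on the matching sda5 partition, and that the config file is /etc/mdadm.conf (it may be /etc/mdadm/mdadm.conf on other distros):

Code:
# try to assemble the missing arrays from the surviving members,
# starting them even though they will be degraded (--run)
mdadm --assemble --run /dev/md2 /dev/sda2
mdadm --assemble --run /dev/md5 /dev/sda5

# if that works, record the ARRAY lines so they come up on boot
mdadm --examine --scan >> /etc/mdadm.conf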