[SOLVED] reactivating raid after drive disconnect - all drives now listed as spares
Yes, stopping first (even though it was already stopped) allowed it to (re)create. Note that initially I was attempting to assemble, and that was not working either. I had tried stopping it before, but evidently not in the correct order of steps. After the successful (re)create, I broke it again (unplugged the other 16 drives), rebooted, and failed and recovered it. The process/commands were:
* Unplugged the drives in the middle of a massive write operation - a couple thousand lines of errors scrolled by on the console, and in an SSH session the copy appeared to continue (?) even though the array was broken. A while later, the server inexplicably rebooted on its own (not sure why, but that's what it did).
* It would not boot at that point because fstab was trying to mount the array, which was no longer valid
* Commented out the automatic mount of the array in fstab and rebooted (see the fstab sketch after this list)
* cat /proc/mdstat showed all of the drives listed as spares
* Did a stop: mdadm --stop /dev/md0
* Recreated the array:
Code:
[root@hz16 ~]# mdadm --create --assume-clean --force /dev/md0 --level=10 --raid-devices=32 /dev/sd{b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z,aa,ab,ac,ad,ae,af,ag}1
mdadm: /dev/sdb1 appears to be part of a raid array:
level=raid10 devices=32 ctime=Sat Feb 27 12:43:03 2016
... <snip> ...
mdadm: /dev/sdag1 appears to be part of a raid array:
level=raid10 devices=32 ctime=Sat Feb 27 12:43:03 2016
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
* Did an fsck, mounted it, and the array was back in business.
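For reference, here is a minimal sketch of the fstab change and the final check/mount, assuming the array held an ext4 filesystem mounted at /mnt/raid (both the filesystem type and the mount point are assumptions, not from the post above):
Code:
# /etc/fstab - auto-mount line commented out so the box can boot without the array
#/dev/md0   /mnt/raid   ext4   defaults   0 0

# after the array is recreated, check the filesystem and mount it by hand
fsck.ext4 -f /dev/md0    # assumed ext4; use the fsck for whatever filesystem is on md0
mount /dev/md0 /mnt/raid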
That gives me a lot more confidence in recovering from issues on the software array. Of course there are backups, but the recovery above took all of two minutes; restoring TBs of data would take much longer.
If you are still testing, you might try it again with "--re-add" and see if that works too.
The advantage would show up if the disk names got out of order: a "create" might play havoc with the recovery in that case, though on a RAID 10 it might not; I would expect a problem with a RAID 5.
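For anyone following along, a rough sketch of that alternative, assuming the same /dev/md0 and using /dev/sdb1 as an illustrative member that dropped out (the device names are only examples):
Code:
# stop whatever half-assembled state is left over
mdadm --stop /dev/md0

# try a plain assemble first; mdadm reads each member's superblock,
# so the order the devices are listed in does not matter
mdadm --assemble --scan
# or name the array and members explicitly:
mdadm --assemble /dev/md0 /dev/sd{b..z}1 /dev/sda{a..g}1

# if the array is already running degraded and one member just needs to rejoin:
mdadm /dev/md0 --re-add /dev/sdb1
Assembling from the superblocks sidesteps the ordering concern, since mdadm works out each member's slot from its metadata rather than from the order on the command line.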