LinuxQuestions.org (/questions/)
-   Linux - General (https://www.linuxquestions.org/questions/linux-general-1/)
-   -   md: kicking non-fresh sda6 from array! (https://www.linuxquestions.org/questions/linux-general-1/md-kicking-non-fresh-sda6-from-array-416853/)

felixgonschorek 06-03-2013 12:52 PM

Thanks
 
Same here - topic is still "hot" :-)

Thanks for helping out

elcattivo 02-26-2014 02:30 PM

Use the --force, admin
 
Hi!

Today I ran into the same problem, but it presented itself a bit differently.
My /proc/mdstat looked like this:

Code:

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : inactive sdc[4] sdd1[3] sdb[5]
      5860538097 blocks super 1.2
     
unused devices: <none>

And I was unable to remove or fail the device specified in the "non-fresh" line.
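
(For reference, these are the sort of commands that usually get tried at that point; the exact invocations and error output weren't preserved here, so treat the following only as an illustrative sketch.)

Code:

# failing or removing a member usually isn't possible while the array
# is stuck in the "inactive" state shown above -- both just error out
mdadm /dev/md0 --fail /dev/sde
mdadm /dev/md0 --remove /dev/sde

# the half-assembled array typically has to be stopped before it can be
# re-assembled from the same member devices (not shown in the original post)
mdadm --stop /dev/md0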

So I had to manually assemble the raid with:

Code:

mdadm --assemble /dev/md0 /dev/sdb /dev/sdc /dev/sdd1 /dev/sde --force

I tried it without --force a few times but it didn't work.
With --force it assembled the array and my /proc/mdstat looked like this:
Code:

more /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active (auto-read-only) raid6 sdc[4] sdd1[3] sdb[5]
      3907020800 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/3] [_UUU]
     
unused devices: <none>
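
(Side note, not from the original post: "active (auto-read-only)" just means the array was brought up read-only and will switch to read-write on the first write. If it needs to be switched manually, mdadm can do that:)

Code:

# mark an auto-read-only array as writable again
mdadm --readwrite /dev/md0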

Also, mdadm --detail /dev/md0 showed that /dev/sde had already been removed (?).
Code:

mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Fri Jun 24 19:29:50 2011
    Raid Level : raid6
    Array Size : 3907020800 (3726.03 GiB 4000.79 GB)
  Used Dev Size : 1953510400 (1863.01 GiB 2000.39 GB)
  Raid Devices : 4
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Mon Feb 24 03:27:03 2014
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

        Layout : left-symmetric
    Chunk Size : 512K

          Name : sarah:0  (local to host sarah)
          UUID : 1a148a7f:4d6a55ca:86bdec2e:c2b06689
        Events : 229601

    Number  Major  Minor  RaidDevice State
      0      0        0        0      removed
      4      8      32        1      active sync  /dev/sdc
      5      8      16        2      active sync  /dev/sdb
      3      8      49        3      active sync  /dev/sdd1

Then I was able to add /dev/sde again and now it's rebuilding.

Code:

# more /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sde[6] sdc[4] sdd1[3] sdb[5]
      3907020800 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/3] [_UUU]
      [>....................]  recovery =  0.7% (14500864/1953510400) finish=1815.8min speed=17796K/sec
     
unused devices: <none>
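
(The add command itself wasn't quoted in the post; presumably it was something along these lines, with --re-add being the variant to try first when the kicked disk still carries usable metadata:)

Code:

# put the kicked device back into the array and let the rebuild start
mdadm /dev/md0 --re-add /dev/sde
# or, if --re-add is refused, add it as a fresh device
mdadm /dev/md0 --add /dev/sde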


bvrulez 08-10-2016 03:30 AM

This is still relevant. I use openmediavault with a RAID6 of four disks and probably had a power issue on shutdown, or I pulled two SATA cables out of the disks before a proper shutdown. I re-arranged the SATA cables, put in a second SATA controller, and after booting two of my four HDDs were removed. I rebuilt the RAID using one of the HDDs, and after that the RAID was okay, but the fourth HDD (which is a spare in this setup) was still removed. Then I shut down the server, and on restart the same problem happened: two disks removed and just a RAID6 of two disks. I am currently rebuilding it again, but I am fairly sure this won't work. So I guess that after rebuilding, the HDD is no longer in a state where I can use the commands above to manually re-assemble it?
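
(Whether a --force re-assemble like the one above is still worth trying can usually be judged by comparing the per-device event counters first; the device names below are only placeholders:)

Code:

# compare the event counters recorded in each member's superblock --
# devices whose counts are close together can normally be forced back in
mdadm --examine /dev/sd[bcde] | grep -E '/dev/sd|Events'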

rainecc 10-16-2016 12:24 PM

Thanks, this thread saved me from lots of pain today. 10 years on!

kohly 01-30-2019 03:49 AM

Very useful!
 
Thank you!

