Gnewbee 07-12-2017 11:44 AM

mdadm raid10 assemble fail
 
Hi all,

I have tried to google this one, but I can't seem to find an answer.
I have a raid10 with one failed drive and another that seems a bit out of date, though its event count is close to that of the two good drives. I would like to reassemble the array (even read-only) to back up the data, but nothing works. I have tried:

Code:

sudo mdadm --assemble --run -o -vv --force /dev/md127 /dev/sdd1 /dev/sdf1 /dev/sdc1
and I get:

Code:

mdadm: looking for devices for /dev/md127
mdadm: UUID differs from /dev/md/0.
mdadm: UUID differs from /dev/md/0.
mdadm: UUID differs from /dev/md/0.
mdadm: /dev/sdd1 is identified as a member of /dev/md127, slot 1.
mdadm: /dev/sdf1 is identified as a member of /dev/md127, slot 3.
mdadm: /dev/sdc1 is identified as a member of /dev/md127, slot 0.
mdadm: added /dev/sdd1 to /dev/md127 as 1
mdadm: no uptodate device for slot 4 of /dev/md127
mdadm: added /dev/sdf1 to /dev/md127 as 3 (possibly out of date)
mdadm: added /dev/sdc1 to /dev/md127 as 0
mdadm: failed to RUN_ARRAY /dev/md127: Input/output error
mdadm: Not enough devices to start the array.


Here is the output of the following command:

Code:

sudo mdadm --examine /dev/sd[c-f]1

Code:

/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
    Array UUID : e1f1e05e:4d88b3ba:605c5318:3305a6c2
          Name : raidVolume1:5
  Creation Time : Fri Jan 13 14:04:24 2012
    Raid Level : raid10
  Raid Devices : 4

 Avail Dev Size : 2930273072 (1397.26 GiB 1500.30 GB)
    Array Size : 2930272256 (2794.53 GiB 3000.60 GB)
  Used Dev Size : 2930272256 (1397.26 GiB 1500.30 GB)
    Data Offset : 2048 sectors
  Super Offset : 8 sectors
  Unused Space : before=1960 sectors, after=816 sectors
          State : active
    Device UUID : 77bc4c0d:bf06805a:80c949d6:5f297478

    Update Time : Sun Jul  2 00:57:01 2017
  Bad Block Log : 512 entries available at offset 72 sectors
      Checksum : 61ef5bf1 - correct
        Events : 5297

        Layout : near=2
    Chunk Size : 512K

  Device Role : Active device 0
  Array State : AA.A ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
    Array UUID : e1f1e05e:4d88b3ba:605c5318:3305a6c2
          Name : raidVolume1:5
  Creation Time : Fri Jan 13 14:04:24 2012
    Raid Level : raid10
  Raid Devices : 4

 Avail Dev Size : 2930273072 (1397.26 GiB 1500.30 GB)
    Array Size : 2930272256 (2794.53 GiB 3000.60 GB)
  Used Dev Size : 2930272256 (1397.26 GiB 1500.30 GB)
    Data Offset : 2048 sectors
  Super Offset : 8 sectors
  Unused Space : before=1960 sectors, after=816 sectors
          State : active
    Device UUID : 05f19f85:63972191:d43f66c0:3e3df7bd

    Update Time : Sun Jul  2 00:57:01 2017
  Bad Block Log : 512 entries available at offset 72 sectors
      Checksum : 4083ab58 - correct
        Events : 5297

        Layout : near=2
    Chunk Size : 512K

  Device Role : Active device 1
  Array State : AA.A ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
    Array UUID : e1f1e05e:4d88b3ba:605c5318:3305a6c2
          Name : raidVolume1:5
  Creation Time : Fri Jan 13 14:04:24 2012
    Raid Level : raid10
  Raid Devices : 4

 Avail Dev Size : 2930273072 (1397.26 GiB 1500.30 GB)
    Array Size : 2930272256 (2794.53 GiB 3000.60 GB)
  Used Dev Size : 2930272256 (1397.26 GiB 1500.30 GB)
    Data Offset : 2048 sectors
  Super Offset : 8 sectors
  Unused Space : before=1960 sectors, after=816 sectors
          State : active
    Device UUID : f15b1bd0:0f36825f:f7eaeb59:06a8f229

    Update Time : Sun Feb  5 02:00:08 2017
  Bad Block Log : 512 entries available at offset 72 sectors
      Checksum : 5e2785f2 - correct
        Events : 4833

        Layout : near=2
    Chunk Size : 512K

  Device Role : Active device 2
  Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
    Array UUID : e1f1e05e:4d88b3ba:605c5318:3305a6c2
          Name : raidVolume1:5
  Creation Time : Fri Jan 13 14:04:24 2012
    Raid Level : raid10
  Raid Devices : 4

 Avail Dev Size : 2930273072 (1397.26 GiB 1500.30 GB)
    Array Size : 2930272256 (2794.53 GiB 3000.60 GB)
  Used Dev Size : 2930272256 (1397.26 GiB 1500.30 GB)
    Data Offset : 2048 sectors
  Super Offset : 8 sectors
  Unused Space : before=1968 sectors, after=816 sectors
          State : active
    Device UUID : 06d4dd09:d3e37bea:56c43770:2f4df042

    Update Time : Sun May  7 00:57:03 2017
      Checksum : 52949aef - correct
        Events : 5293

        Layout : near=2
    Chunk Size : 512K

  Device Role : Active device 3
  Array State : AA.A ('A' == active, '.' == missing, 'R' == replacing)

The only remaining option seems to be recreating the array, but since that is risky, I'd like to find a safer approach first.
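For reference, and strictly as a last resort after imaging every member drive, my understanding is that a recreate would look roughly like this (a sketch only; the device order, layout and chunk size are taken from the --examine output above, and a newer mdadm might pick a different data offset than the original 2048 sectors):

Code:

# DANGEROUS last resort -- image all drives first (e.g. with ddrescue)
# --assume-clean stops mdadm from starting an initial sync over the data;
# device order, layout and chunk must match the original superblocks exactly
sudo mdadm --create /dev/md5 --assume-clean --level=10 --raid-devices=4 \
    --metadata=1.2 --layout=n2 --chunk=512 \
    /dev/sdc1 /dev/sdd1 missing /dev/sdf1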
Any help would be greatly appreciated.

Gnewbee

lazydog 07-12-2017 02:14 PM

It seems you did not fail the bad drive and remove it from the array before stopping the array. So now you are trying to start the array without all the drives, and it cannot start that way, as is evident from your output:

Code:

mdadm: Not enough devices to start the array.
You might try starting your array with all 4 disks and then failing out /dev/sde1 from the array like this:

Code:

mdadm --manage /dev/md127 --fail /dev/sde1
mdadm --manage /dev/md127 --remove /dev/sde1

Once you have a new drive, you can add it like this:

Code:

mdadm --manage /dev/md127 --add /dev/sde1
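The rebuild can then be watched with the usual status checks:

Code:

# watch resync/rebuild progress
cat /proc/mdstat
mdadm --detail /dev/md127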

Gnewbee 07-13-2017 04:32 AM

Hi

The problem is that I can't start it with any combination of drives. I have also tried all 4 drives:
Code:

sudo mdadm --assemble --run -o -vv --force /dev/md5 /dev/sd[c-f]1
but I get:


Code:

mdadm: looking for devices for /dev/md5
mdadm: UUID differs from /dev/md/0.
mdadm: UUID differs from /dev/md/0.
mdadm: UUID differs from /dev/md/0.
mdadm: UUID differs from /dev/md/0.
mdadm: /dev/sdc1 is identified as a member of /dev/md5, slot 0.
mdadm: /dev/sdd1 is identified as a member of /dev/md5, slot 1.
mdadm: /dev/sde1 is identified as a member of /dev/md5, slot 2.
mdadm: /dev/sdf1 is identified as a member of /dev/md5, slot 3.
mdadm: added /dev/sdd1 to /dev/md5 as 1
mdadm: added /dev/sde1 to /dev/md5 as 2 (possibly out of date)
mdadm: added /dev/sdf1 to /dev/md5 as 3 (possibly out of date)
mdadm: added /dev/sdc1 to /dev/md5 as 0
mdadm: failed to RUN_ARRAY /dev/md5: Input/output error
mdadm: Not enough devices to start the array.

I suspect the problem is the "possibly out of date" message, but I have already tried the --force option.
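Would it make sense to first stop any half-assembled remnant from my earlier attempts and then force-assemble with only the three freshest members (sde1 is 464 events behind the others)? Something like:

Code:

# release any partially assembled, inactive array from earlier attempts
sudo mdadm --stop /dev/md5
sudo mdadm --stop /dev/md127
# force-assemble read-only with the three freshest members only
sudo mdadm --assemble --force --run -o /dev/md5 /dev/sdc1 /dev/sdd1 /dev/sdf1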

Thanks in advance for any help

Gnewbee

lazydog 07-13-2017 08:23 AM

Wait, in your first post you were using md127, and now you are trying to use md5?

What does your mdadm.conf file have in it?

Gnewbee 07-13-2017 08:59 AM

Well spotted. md127 was what mdadm was trying to auto-assign, so I reused it before looking at my mdadm.conf file and noticing the 5.
Here's the content of the file:
Code:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/0  metadata=1.2 UUID=2f536e6c:128e0beb:22021404:36cc8865 name=ubuntu:0
#ARRAY /dev/md/5  metadata=1.2 UUID=e1f1e05e:4d88b3ba:605c5318:3305a6c2 name=raidVolume1:5

# This file was auto-generated on Wed, 11 Jan 2017 12:49:28 +0000
# by mkconf $Id$
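
If I ever get md5 assembled again, I assume I can regenerate the commented-out ARRAY line with the standard scan and paste the result back in:

Code:

# prints current ARRAY lines; review before adding to mdadm.conf
sudo mdadm --detail --scan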


lazydog 07-13-2017 12:36 PM

You should also note that, according to this file, your array is /dev/md/0, not /dev/md5.
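You can confirm which UUID belongs to which array with a standard detail query:

Code:

# compare against the UUIDs in mdadm.conf and in --examine
sudo mdadm --detail /dev/md0 | grep UUID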

Gnewbee 07-14-2017 05:36 AM

Actually, md0 is a RAID1 volume for my root, home, ... which fortunately is still working.
md5 is just data (which I'd still like to recover).

lazydog 07-14-2017 06:40 AM

OK, so why is it commented out then?

Have you tried:
Code:

mdadm --assemble --scan --force

Gnewbee 07-14-2017 10:15 AM

md0 is there; md5 is commented out, but I can't really remember why. It may be that the ID changed and the array wouldn't mount like that, so I commented it out. The truth is I really don't remember.

Is there any risk to mounted and healthy volumes (the root md0 in my case) in running:
Code:

mdadm --assemble --scan --force
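Alternatively, to keep md0 out of it entirely, I assume the assembly can be limited to the one array by naming its UUID and member partitions explicitly:

Code:

# touch only the data array; md0's disks are never considered for assembly
sudo mdadm --assemble --force /dev/md5 --uuid=e1f1e05e:4d88b3ba:605c5318:3305a6c2 /dev/sd[c-f]1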
Thanks

Gnewbee

