Old 07-12-2017, 11:44 AM   #1
Gnewbee
mdadm raid10 assemble fail


Hi all,

I have tried to Google this one but I can't seem to find an answer.
I have a RAID10 with one failed drive and another that seems a bit out of date, although its event count is close to that of the two good members. I would like to reassemble the array (even read-only) so I can back up the data, but nothing works. I have tried

Code:
sudo mdadm --assemble --run -o -vv --force /dev/md127 /dev/sdd1 /dev/sdf1 /dev/sdc1
and I get:

Code:
mdadm: looking for devices for /dev/md127
mdadm: UUID differs from /dev/md/0.
mdadm: UUID differs from /dev/md/0.
mdadm: UUID differs from /dev/md/0.
mdadm: /dev/sdd1 is identified as a member of /dev/md127, slot 1.
mdadm: /dev/sdf1 is identified as a member of /dev/md127, slot 3.
mdadm: /dev/sdc1 is identified as a member of /dev/md127, slot 0.
mdadm: added /dev/sdd1 to /dev/md127 as 1
mdadm: no uptodate device for slot 4 of /dev/md127
mdadm: added /dev/sdf1 to /dev/md127 as 3 (possibly out of date)
mdadm: added /dev/sdc1 to /dev/md127 as 0
mdadm: failed to RUN_ARRAY /dev/md127: Input/output error
mdadm: Not enough devices to start the array.

Here is the output of the following command:
Code:
sudo mdadm --examine /dev/sd[c-f]1

Code:
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : e1f1e05e:4d88b3ba:605c5318:3305a6c2
           Name : raidVolume1:5
  Creation Time : Fri Jan 13 14:04:24 2012
     Raid Level : raid10
   Raid Devices : 4

 Avail Dev Size : 2930273072 (1397.26 GiB 1500.30 GB)
     Array Size : 2930272256 (2794.53 GiB 3000.60 GB)
  Used Dev Size : 2930272256 (1397.26 GiB 1500.30 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1960 sectors, after=816 sectors
          State : active
    Device UUID : 77bc4c0d:bf06805a:80c949d6:5f297478

    Update Time : Sun Jul  2 00:57:01 2017
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 61ef5bf1 - correct
         Events : 5297

         Layout : near=2
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AA.A ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : e1f1e05e:4d88b3ba:605c5318:3305a6c2
           Name : raidVolume1:5
  Creation Time : Fri Jan 13 14:04:24 2012
     Raid Level : raid10
   Raid Devices : 4

 Avail Dev Size : 2930273072 (1397.26 GiB 1500.30 GB)
     Array Size : 2930272256 (2794.53 GiB 3000.60 GB)
  Used Dev Size : 2930272256 (1397.26 GiB 1500.30 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1960 sectors, after=816 sectors
          State : active
    Device UUID : 05f19f85:63972191:d43f66c0:3e3df7bd

    Update Time : Sun Jul  2 00:57:01 2017
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 4083ab58 - correct
         Events : 5297

         Layout : near=2
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AA.A ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : e1f1e05e:4d88b3ba:605c5318:3305a6c2
           Name : raidVolume1:5
  Creation Time : Fri Jan 13 14:04:24 2012
     Raid Level : raid10
   Raid Devices : 4

 Avail Dev Size : 2930273072 (1397.26 GiB 1500.30 GB)
     Array Size : 2930272256 (2794.53 GiB 3000.60 GB)
  Used Dev Size : 2930272256 (1397.26 GiB 1500.30 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1960 sectors, after=816 sectors
          State : active
    Device UUID : f15b1bd0:0f36825f:f7eaeb59:06a8f229

    Update Time : Sun Feb  5 02:00:08 2017
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 5e2785f2 - correct
         Events : 4833

         Layout : near=2
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : e1f1e05e:4d88b3ba:605c5318:3305a6c2
           Name : raidVolume1:5
  Creation Time : Fri Jan 13 14:04:24 2012
     Raid Level : raid10
   Raid Devices : 4

 Avail Dev Size : 2930273072 (1397.26 GiB 1500.30 GB)
     Array Size : 2930272256 (2794.53 GiB 3000.60 GB)
  Used Dev Size : 2930272256 (1397.26 GiB 1500.30 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=816 sectors
          State : active
    Device UUID : 06d4dd09:d3e37bea:56c43770:2f4df042

    Update Time : Sun May  7 00:57:03 2017
       Checksum : 52949aef - correct
         Events : 5293

         Layout : near=2
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : AA.A ('A' == active, '.' == missing, 'R' == replacing)
The only option left seems to be recreating the array, but since that is risky I'd like to see whether there is a safer approach first.
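If it helps, the kernel log should record why md refused to start the array when RUN_ARRAY returned the I/O error; it can be checked after a failed attempt with something like this (generic commands, nothing here is specific to this array):
Code:
# recent md-related kernel messages after a failed assemble attempt
dmesg | grep -iE 'md/raid|md127' | tail -n 20
# or, on a systemd-based system
journalctl -k | grep -i md | tail -n 20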
Any help would be greatly appreciated.

Gnewbee
 
Old 07-12-2017, 02:14 PM   #2
lazydog
It seems you did not fail the bad drive out and remove it from the array before stopping the array. So now you are trying to start the array without all of its drives, and it cannot start that way, as evidenced by your output:

Code:
mdadm: Not enough devices to start the array.
You might try starting your array with all 4 disks and then failing /dev/sde1 out of the array like this:

Code:
mdadm --manage /dev/md127 --fail /dev/sde1
mdadm --manage /dev/md127 --remove /dev/sde1
Once you have a new drive you can add it like this:

Code:
mdadm --manage /dev/md127 --add /dev/sde1
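Once the new drive has been added, the rebuild can be watched with the usual status commands (nothing array-specific assumed here beyond the md127 name used above):
Code:
cat /proc/mdstat              # overall array state and resync/rebuild progress
mdadm --detail /dev/md127     # per-device state, including the drive being rebuilt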
 
Old 07-13-2017, 04:32 AM   #3
Gnewbee
Hi

The problem is that I can't start it with any combination of drives. I have also tried all 4 drives
Code:
sudo mdadm --assemble --run -o -vv --force /dev/md5 /dev/sd[c-f]1
but I get:


Code:
mdadm: looking for devices for /dev/md5
mdadm: UUID differs from /dev/md/0.
mdadm: UUID differs from /dev/md/0.
mdadm: UUID differs from /dev/md/0.
mdadm: UUID differs from /dev/md/0.
mdadm: /dev/sdc1 is identified as a member of /dev/md5, slot 0.
mdadm: /dev/sdd1 is identified as a member of /dev/md5, slot 1.
mdadm: /dev/sde1 is identified as a member of /dev/md5, slot 2.
mdadm: /dev/sdf1 is identified as a member of /dev/md5, slot 3.
mdadm: added /dev/sdd1 to /dev/md5 as 1
mdadm: added /dev/sde1 to /dev/md5 as 2 (possibly out of date)
mdadm: added /dev/sdf1 to /dev/md5 as 3 (possibly out of date)
mdadm: added /dev/sdc1 to /dev/md5 as 0
mdadm: failed to RUN_ARRAY /dev/md5: Input/output error
mdadm: Not enough devices to start the array.
I suspect the problem is the "possibly out of date" message, but I have already tried the --force option.
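For reference, this is roughly how the event counters and update times can be compared side by side (the grep pattern is only illustrative):
Code:
sudo mdadm --examine /dev/sd[c-f]1 | grep -E '^/dev/sd|Events|Update Time|Device Role'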

Thanks in advance for any help

Gnewbee

Last edited by Gnewbee; 07-13-2017 at 04:34 AM.
 
Old 07-13-2017, 08:23 AM   #4
lazydog
Wait, in your first post you were using md127, and now you are trying to use md5?

What does your mdadm.conf file have in it?
 
Old 07-13-2017, 08:59 AM   #5
Gnewbee
Well spotted. 127 was the number mdadm was trying to auto-assign, so I reused it before looking at my mdadm.conf file and noticing the 5.
Here's the content of the file:
Code:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/0  metadata=1.2 UUID=2f536e6c:128e0beb:22021404:36cc8865 name=ubuntu:0
#ARRAY /dev/md/5  metadata=1.2 UUID=e1f1e05e:4d88b3ba:605c5318:3305a6c2 name=raidVolume1:5

# This file was auto-generated on Wed, 11 Jan 2017 12:49:28 +0000
# by mkconf $Id$
 
Old 07-13-2017, 12:36 PM   #6
lazydog
You should also take note that, according to this file, your array is /dev/md/0, not /dev/md5.
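If md5 is meant to be a permanent array, the commented-out definition could presumably be re-enabled once the array assembles again; on Debian/Ubuntu the initramfs copy of mdadm.conf would also need refreshing. Roughly:
Code:
# /etc/mdadm/mdadm.conf -- re-enable the existing (currently commented) definition
ARRAY /dev/md/5  metadata=1.2 UUID=e1f1e05e:4d88b3ba:605c5318:3305a6c2 name=raidVolume1:5

# then rebuild the initramfs so boot-time assembly sees the same config (Debian/Ubuntu)
sudo update-initramfs -u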
 
Old 07-14-2017, 05:36 AM   #7
Gnewbee
Actually, md0 is a RAID1 volume holding my root, home, ... partitions, and fortunately it is still working.
md5 is just data (which I'd still like to recover).
 
Old 07-14-2017, 06:40 AM   #8
lazydog
OK, so why is it commented out then?

Have you tried:
Code:
mdadm --assemble --scan --force
 
Old 07-14-2017, 10:15 AM   #9
Gnewbee
md0 is there; md5 is commented out, but I can't really remember why. It may be that the ID changed and it wouldn't mount like that, so I commented it out. Truth is, I really don't remember.

Is there any risk to mounted and healthy volumes (the root md0 in my case) from running:
Code:
mdadm --assemble --scan --force
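I assume the assemble could also be limited to just the md5 members by UUID (taken from the --examine output above) rather than scanning everything, along the lines of the sketch below, but I'm not sure whether that actually avoids any risk to md0:
Code:
# untested sketch: assemble only the members carrying the md5 array UUID, read-only
sudo mdadm --assemble --run -o --force /dev/md5 \
     --uuid=e1f1e05e:4d88b3ba:605c5318:3305a6c2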
Thanks

Gnewbee
 
  


Tags
mdadm, raid, ubuntu 16.04

