Old 01-05-2015, 05:45 PM   #1
fermulator
LQ Newbie
 
Registered: Feb 2011
Posts: 19

Rep: Reputation: 1
mdadm RAID6 "active" with spares and failed disks; need help


I'm in a bit of a pickle here. Could really use some advice.
NOTE: there's a bit of "history" to this; if you want to see what happened before, see http://ubuntuforums.org/showthread.php?t=2259563, but otherwise I'm trying to keep this discussion focused on the current issue.

I have a RAID6 mdadm device (/dev/md2000) running on Ubuntu 12.04 w/ mdadm 3.2.5.

A disk failed recently, and then another one dropped out. I left the failed disk out and rebuilt by adding the dropped disk back in (mdadm /dev/md2000 --add /dev/sdX).

It spent all of today adding that 7th drive (/dev/sdm1, out of 8), and now the array is in a state I've never seen before:
http://paste.ubuntu.com/9679403/

Code:
Every 1.0s: cat /proc/mdstat                                                                             Mon Jan  5 18:44:51 2015

Personalities : [raid1] [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid10]
md2000 : active raid6 sdm1[8](S) sdo1[3] sdi1[4] sdn1[2] sdk1[0](F) sdl1[6] sdp1[7]
      11721080448 blocks super 1.1 level 6, 64k chunk, algorithm 2 [8/5] [__UU_UUU]

(snip)

unused devices: <none>
What should I do?

Last edited by fermulator; 01-09-2015 at 08:19 AM.
 
Old 01-06-2015, 04:35 AM   #2
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Centos 7.7 (?), Centos 8.1
Posts: 17,790

Rep: Reputation: 2538
Try 'mdadm --detail ...'
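Something like this (adjust the array and member device names to your setup):
Code:
sudo mdadm --detail /dev/md2000
sudo mdadm --examine /dev/sdk1    # repeat for each member partition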

It looks like you might have 3 dead drives there; RAID6 can only tolerate 2 failures: https://en.wikipedia.org/wiki/RAID#Standard_levels
 
Old 01-06-2015, 05:40 AM   #3
fermulator
LQ Newbie
 
Registered: Feb 2011
Posts: 19

Original Poster
Rep: Reputation: 1
Here's what --detail shows:
Code:
fermulator@fermmy-server:~$ sudo mdadm --detail /dev/md2000
/dev/md2000:
        Version : 1.1
  Creation Time : Fri Apr 22 01:12:07 2011
     Raid Level : raid6
     Array Size : 11721080448 (11178.09 GiB 12002.39 GB)
  Used Dev Size : 1953513408 (1863.02 GiB 2000.40 GB)
   Raid Devices : 8
  Total Devices : 7
    Persistence : Superblock is persistent

    Update Time : Mon Jan  5 18:41:09 2015
          State : clean, FAILED 
 Active Devices : 5
Working Devices : 6
 Failed Devices : 1
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

           Name : fermmy-server:2000  (local to host fermmy-server)
           UUID : 15d2158f:5cf74d95:fd7f5607:0e447573
         Events : 42965

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       0        0        1      removed
       2       8      209        2      active sync   /dev/sdn1
       3       8      225        3      active sync   /dev/sdo1
       4       0        0        4      removed
       4       8      129        5      active sync   /dev/sdi1
       7       8      241        6      active sync   /dev/sdp1
       6       8      177        7      active sync   /dev/sdl1

       0       8      161        -      faulty spare   /dev/sdk1
       8       8      193        -      spare   /dev/sdm1
Device roles:
Code:
fermulator@fermmy-server:/var/crash$ sudo mdadm --examine /dev/sdk1 | grep Role
   Device Role : Active device 0
fermulator@fermmy-server:/var/crash$ sudo mdadm --examine /dev/sdm1 | grep Role
   Device Role : spare
fermulator@fermmy-server:/var/crash$ sudo mdadm --examine /dev/sdn1 | grep Role
   Device Role : Active device 2
fermulator@fermmy-server:/var/crash$ sudo mdadm --examine /dev/sdo1 | grep Role
   Device Role : Active device 3
fermulator@fermmy-server:/var/crash$ sudo mdadm --examine /dev/sdj1 | grep Role
   Device Role : Active device 4
fermulator@fermmy-server:/var/crash$ sudo mdadm --examine /dev/sdi1 | grep Role
   Device Role : Active device 5
fermulator@fermmy-server:/var/crash$ sudo mdadm --examine /dev/sdp1 | grep Role
   Device Role : Active device 6
fermulator@fermmy-server:/var/crash$ sudo mdadm --examine /dev/sdl1 | grep Role
   Device Role : Active device 7

Last edited by fermulator; 01-06-2015 at 06:27 AM.
 
Old 01-06-2015, 05:47 AM   #4
fermulator
LQ Newbie
 
Registered: Feb 2011
Posts: 19

Original Poster
Rep: Reputation: 1
When drives kept "dropping out", I was using the '--add' command (in manage mode) to get them back into the array. For example, when /dev/sdm1 dropped out, I did:
Code:
sudo mdadm /dev/md2000 --add /dev/sdm1
The mdadm man page says:
Code:
       -a, --add
              hot-add listed devices.  If a device appears to have recently been part  of
              the  array  (possibly  it  failed or was removed) the device is re-added as
              described in the next point.  If that fails or the device was never part of
              the  array,  the device is added as a hot-spare.  If the array is degraded,
              it will immediately start to rebuild data onto that spare.

              Note that this and the following options are only meaningful on array  with
              redundancy.  They don't apply to RAID0 or Linear.
As it says above, running "--add" on a device that was previously a member should behave like a "--re-add":

Code:
       --re-add
              re-add  a  device that was previous removed from an array.  If the metadata
              on the device reports that it is a member of the array, and the  slot  that
              it used is still vacant, then the device will be added back to the array in
              the same position.  This will normally cause the data for that device to be
              recovered.   However  based  on the event count on the device, the recovery
              may only require sections that are flagged  a  write-intent  bitmap  to  be
              recovered or may not require any recovery at all.

              When used on an array that has no metadata (i.e. it was built with --build)
              it will be assumed that bitmap-based recovery is enough to make the  device
              fully consistent with the array.

              When  --re-add can be accompanied by --update=devicesize.  See the descrip‐
              tion of this option when used in Assemble mode for an  explanation  of  its
              use.

              If  the device name given is missing then mdadm will try to find any device
              that looks like it should be part of the array but isn't and  will  try  to
              re-add all such devices.
Isn't this correct????

It's showing some drives as spares now, which seems insane to me. The drive WAS previously part of the array (in fact this same array) ... so it should have been re-incorporated into the array, not added as a spare...
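For reference, the explicit form would be something like this (a sketch; a re-add only works while the original slot is still vacant and the member's event count is close enough):
Code:
sudo mdadm /dev/md2000 --re-add /dev/sdm1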

Last edited by fermulator; 01-06-2015 at 06:06 AM.
 
Old 01-06-2015, 06:00 AM   #5
fermulator
LQ Newbie
 
Registered: Feb 2011
Posts: 19

Original Poster
Rep: Reputation: 1
Here is the metadata for the drives that have been relegated to spare:

Code:
fermulator@fermmy-server:/var/crash$ sudo mdadm --examine /dev/sdk1
/dev/sdk1:
          Magic : a92b4efc
        Version : 1.1
    Feature Map : 0x0
     Array UUID : 15d2158f:5cf74d95:fd7f5607:0e447573
           Name : fermmy-server:2000  (local to host fermmy-server)
  Creation Time : Fri Apr 22 01:12:07 2011
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 3907026856 (1863.02 GiB 2000.40 GB)
     Array Size : 11721080448 (11178.09 GiB 12002.39 GB)
  Used Dev Size : 3907026816 (1863.02 GiB 2000.40 GB)
    Data Offset : 264 sectors
   Super Offset : 0 sectors
          State : clean
    Device UUID : c4987964:6bffbd66:1edbb518:5c9a2e52

    Update Time : Mon Jan  5 18:33:51 2015
       Checksum : dd489139 - correct
         Events : 42954

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : AAAA.AAA ('A' == active, '.' == missing)
Code:
fermulator@fermmy-server:/var/crash$ sudo mdadm --examine /dev/sdm1
/dev/sdm1:
          Magic : a92b4efc
        Version : 1.1
    Feature Map : 0x0
     Array UUID : 15d2158f:5cf74d95:fd7f5607:0e447573
           Name : fermmy-server:2000  (local to host fermmy-server)
  Creation Time : Fri Apr 22 01:12:07 2011
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 3907026816 (1863.02 GiB 2000.40 GB)
     Array Size : 11721080448 (11178.09 GiB 12002.39 GB)
    Data Offset : 304 sectors
   Super Offset : 0 sectors
          State : clean
    Device UUID : 1334cde9:3a041fad:0b745c2b:979875c0

    Update Time : Tue Jan  6 06:38:58 2015
       Checksum : 29ed7285 - correct
         Events : 42967

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : spare
   Array State : ..AA.AAA ('A' == active, '.' == missing)

And /dev/sdj is the ''original'' failure, which in all likelihood is a real hardware fault.
Code:
fermulator@fermmy-server:/var/crash$ sudo mdadm --examine /dev/sdj1
/dev/sdj1:
          Magic : a92b4efc
        Version : 1.1
    Feature Map : 0x2
     Array UUID : 15d2158f:5cf74d95:fd7f5607:0e447573
           Name : fermmy-server:2000  (local to host fermmy-server)
  Creation Time : Fri Apr 22 01:12:07 2011
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 3907026816 (1863.02 GiB 2000.40 GB)
     Array Size : 11721080448 (11178.09 GiB 12002.39 GB)
    Data Offset : 304 sectors
   Super Offset : 0 sectors
Recovery Offset : 2441891840 sectors
          State : clean
    Device UUID : eee3ae0e:f594fdba:58e19113:bc196464

    Update Time : Mon Jan  5 00:30:41 2015
       Checksum : 7a5a498d - correct
         Events : 42912

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 4
   Array State : A.AAAAAA ('A' == active, '.' == missing)

Last edited by fermulator; 01-06-2015 at 06:06 AM.
 
Old 01-06-2015, 06:40 AM   #6
fermulator
LQ Newbie
 
Registered: Feb 2011
Posts: 19

Original Poster
Rep: Reputation: 1
I was on IRC and someone suggested I try to re-assemble the array. I did:
Code:
$ sudo mdadm --stop /dev/md2000
$ sudo mdadm --assemble /dev/md2000 /dev/sdn1 /dev/sdo1 /dev/sdi1 /dev/sdp1 /dev/sdl1 /dev/sdk1 /dev/sdm1
mdadm: /dev/md2000 assembled from 5 drives and 1 spare - not enough to start the array
Now it's showing
Code:
fermulator@fermmy-server:/var/crash$ sudo mdadm --detail /dev/md2000
mdadm: md device /dev/md2000 does not appear to be active.

md2000 : inactive sdn1[2](S) sdm1[8](S) sdl1[6](S) sdp1[7](S) sdi1[4](S) sdo1[3](S) sdk1[0](S)
      13674593976 blocks super 1.1
Really scary - it says the array was assembled from only 5 drives + 1 spare, but I count 7 drives in that list!

So here's the state of each member, which I /think/ helps in deciding which drives to use to force a re-assembly (assuming they're clean):
Code:
fermulator@fermmy-server:~$ sudo mdadm -E /dev/sd[nmlpiokj]1 | egrep 'Event|/dev'
/dev/sdi1:
         Events : 42971
/dev/sdj1:
         Events : 42912
/dev/sdk1:
         Events : 42954
/dev/sdl1:
         Events : 42971
/dev/sdm1:
         Events : 42971
/dev/sdn1:
         Events : 42971
/dev/sdo1:
         Events : 42971
/dev/sdp1:
         Events : 42971
Code:
fermulator@fermmy-server:~$ sudo mdadm -E /dev/sd[nmlpiokj]1 
/dev/sdi1:
          Magic : a92b4efc
        Version : 1.1
    Feature Map : 0x0
     Array UUID : 15d2158f:5cf74d95:fd7f5607:0e447573
           Name : fermmy-server:2000  (local to host fermmy-server)
  Creation Time : Fri Apr 22 01:12:07 2011
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 3907026856 (1863.02 GiB 2000.40 GB)
     Array Size : 11721080448 (11178.09 GiB 12002.39 GB)
  Used Dev Size : 3907026816 (1863.02 GiB 2000.40 GB)
    Data Offset : 264 sectors
   Super Offset : 0 sectors
          State : clean
    Device UUID : c5d5e082:12406beb:621b6ea1:666a804f

    Update Time : Tue Jan  6 07:24:19 2015
       Checksum : 669d116 - correct
         Events : 42971

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 5
   Array State : ..AA.AAA ('A' == active, '.' == missing)
/dev/sdj1:
          Magic : a92b4efc
        Version : 1.1
    Feature Map : 0x2
     Array UUID : 15d2158f:5cf74d95:fd7f5607:0e447573
           Name : fermmy-server:2000  (local to host fermmy-server)
  Creation Time : Fri Apr 22 01:12:07 2011
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 3907026816 (1863.02 GiB 2000.40 GB)
     Array Size : 11721080448 (11178.09 GiB 12002.39 GB)
    Data Offset : 304 sectors
   Super Offset : 0 sectors
Recovery Offset : 2441891840 sectors
          State : clean
    Device UUID : eee3ae0e:f594fdba:58e19113:bc196464

    Update Time : Mon Jan  5 00:30:41 2015
       Checksum : 7a5a498d - correct
         Events : 42912

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 4
   Array State : A.AAAAAA ('A' == active, '.' == missing)
/dev/sdk1:
          Magic : a92b4efc
        Version : 1.1
    Feature Map : 0x0
     Array UUID : 15d2158f:5cf74d95:fd7f5607:0e447573
           Name : fermmy-server:2000  (local to host fermmy-server)
  Creation Time : Fri Apr 22 01:12:07 2011
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 3907026856 (1863.02 GiB 2000.40 GB)
     Array Size : 11721080448 (11178.09 GiB 12002.39 GB)
  Used Dev Size : 3907026816 (1863.02 GiB 2000.40 GB)
    Data Offset : 264 sectors
   Super Offset : 0 sectors
          State : clean
    Device UUID : c4987964:6bffbd66:1edbb518:5c9a2e52

    Update Time : Mon Jan  5 18:33:51 2015
       Checksum : dd489139 - correct
         Events : 42954

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : AAAA.AAA ('A' == active, '.' == missing)
/dev/sdl1:
          Magic : a92b4efc
        Version : 1.1
    Feature Map : 0x0
     Array UUID : 15d2158f:5cf74d95:fd7f5607:0e447573
           Name : fermmy-server:2000  (local to host fermmy-server)
  Creation Time : Fri Apr 22 01:12:07 2011
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 3907026856 (1863.02 GiB 2000.40 GB)
     Array Size : 11721080448 (11178.09 GiB 12002.39 GB)
  Used Dev Size : 3907026816 (1863.02 GiB 2000.40 GB)
    Data Offset : 264 sectors
   Super Offset : 0 sectors
          State : clean
    Device UUID : 069ccef6:84cd9f55:fe6a56f4:94c37cd7

    Update Time : Tue Jan  6 07:24:19 2015
       Checksum : bf70cd95 - correct
         Events : 42971

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 7
   Array State : ..AA.AAA ('A' == active, '.' == missing)
/dev/sdm1:
          Magic : a92b4efc
        Version : 1.1
    Feature Map : 0x0
     Array UUID : 15d2158f:5cf74d95:fd7f5607:0e447573
           Name : fermmy-server:2000  (local to host fermmy-server)
  Creation Time : Fri Apr 22 01:12:07 2011
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 3907026816 (1863.02 GiB 2000.40 GB)
     Array Size : 11721080448 (11178.09 GiB 12002.39 GB)
    Data Offset : 304 sectors
   Super Offset : 0 sectors
          State : clean
    Device UUID : 1334cde9:3a041fad:0b745c2b:979875c0

    Update Time : Tue Jan  6 07:24:19 2015
       Checksum : 29ed7d2a - correct
         Events : 42971

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : spare
   Array State : ..AA.AAA ('A' == active, '.' == missing)
/dev/sdn1:
          Magic : a92b4efc
        Version : 1.1
    Feature Map : 0x0
     Array UUID : 15d2158f:5cf74d95:fd7f5607:0e447573
           Name : fermmy-server:2000  (local to host fermmy-server)
  Creation Time : Fri Apr 22 01:12:07 2011
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 3907026856 (1863.02 GiB 2000.40 GB)
     Array Size : 11721080448 (11178.09 GiB 12002.39 GB)
  Used Dev Size : 3907026816 (1863.02 GiB 2000.40 GB)
    Data Offset : 264 sectors
   Super Offset : 0 sectors
          State : clean
    Device UUID : d19cf49d:17b0d54a:ccc1022d:ee0e5a6f

    Update Time : Tue Jan  6 07:24:19 2015
       Checksum : 2c565316 - correct
         Events : 42971

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 2
   Array State : ..AA.AAA ('A' == active, '.' == missing)
/dev/sdo1:
          Magic : a92b4efc
        Version : 1.1
    Feature Map : 0x0
     Array UUID : 15d2158f:5cf74d95:fd7f5607:0e447573
           Name : fermmy-server:2000  (local to host fermmy-server)
  Creation Time : Fri Apr 22 01:12:07 2011
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 3907026856 (1863.02 GiB 2000.40 GB)
     Array Size : 11721080448 (11178.09 GiB 12002.39 GB)
  Used Dev Size : 3907026816 (1863.02 GiB 2000.40 GB)
    Data Offset : 264 sectors
   Super Offset : 0 sectors
          State : clean
    Device UUID : 846f3077:e3e98878:43ef5e88:0357b806

    Update Time : Tue Jan  6 07:24:19 2015
       Checksum : 25ffd522 - correct
         Events : 42971

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 3
   Array State : ..AA.AAA ('A' == active, '.' == missing)
/dev/sdp1:
          Magic : a92b4efc
        Version : 1.1
    Feature Map : 0x0
     Array UUID : 15d2158f:5cf74d95:fd7f5607:0e447573
           Name : fermmy-server:2000  (local to host fermmy-server)
  Creation Time : Fri Apr 22 01:12:07 2011
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 3907026856 (1863.02 GiB 2000.40 GB)
     Array Size : 11721080448 (11178.09 GiB 12002.39 GB)
  Used Dev Size : 3907026816 (1863.02 GiB 2000.40 GB)
    Data Offset : 264 sectors
   Super Offset : 0 sectors
          State : clean
    Device UUID : 23ec0c55:3928e985:c8c64568:b9c9fbd3

    Update Time : Tue Jan  6 07:24:19 2015
       Checksum : be66da56 - correct
         Events : 42971

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 6
   Array State : ..AA.AAA ('A' == active, '.' == missing)
What to do? I /think/ I need to force re-assembly. Should I use overlays as described by https://raid.wiki.kernel.org/index.p...software_RAID?
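For reference, the overlay approach from that wiki page boils down to something like the sketch below (placeholder paths and device names; every write made during the experiments then lands in a sparse overlay file instead of on the real member):
Code:
# one overlay per member; /dev/sdX1 and /overlays are placeholders
truncate -s 2T /overlays/sdX1.ovl                       # sparse copy-on-write file
loopdev=$(sudo losetup -f --show /overlays/sdX1.ovl)
size=$(sudo blockdev --getsz /dev/sdX1)                 # member size in 512-byte sectors
sudo dmsetup create sdX1-ov --table "0 $size snapshot /dev/sdX1 $loopdev P 8"
# ...then assemble and experiment against /dev/mapper/sdX1-ov instead of /dev/sdX1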

Last edited by fermulator; 01-06-2015 at 08:02 AM.
 
Old 01-06-2015, 09:13 PM   #7
fermulator
LQ Newbie
 
Registered: Feb 2011
Posts: 19

Original Poster
Rep: Reputation: 1
After more reading, I was fairly sure this would work, but it didn't take either:
Code:
fermulator@fermmy-server:/var/crash$ sudo mdadm --assemble --force /dev/md2000 missing /dev/sdm1 /dev/sdn1 /dev/sdo1 missing /dev/sdi1 /dev/sdp1 /dev/sdl1
mdadm: cannot open device missing: No such file or directory
mdadm: missing has no superblock - assembly aborted
Ah, the only way to use the "missing" keyword is in create mode:
Code:
fermulator@fermmy-server:/var/crash$ sudo mdadm --create /dev/md2000 --force --level=6 --raid-devices=8 --metadata 1.1 missing /dev/sdm1 /dev/sdn1 /dev/sdo1 missing /dev/sdi1 /dev/sdp1 /dev/sdl1
mdadm: /dev/sdm1 appears to contain an ext2fs file system
    size=2147481472K  mtime=Fri Apr 22 01:00:32 2011
mdadm: /dev/sdm1 appears to be part of a raid array:
    level=raid6 devices=8 ctime=Fri Apr 22 01:12:07 2011
mdadm: /dev/sdn1 appears to be part of a raid array:
    level=raid6 devices=8 ctime=Fri Apr 22 01:12:07 2011
mdadm: /dev/sdo1 appears to contain an ext2fs file system
    size=1891628992K  mtime=Wed Apr 26 04:55:12 2028
mdadm: /dev/sdo1 appears to be part of a raid array:
    level=raid6 devices=8 ctime=Fri Apr 22 01:12:07 2011
mdadm: /dev/sdi1 appears to contain an ext2fs file system
    size=1953513560K  mtime=Fri Apr 22 00:56:28 2011
mdadm: /dev/sdi1 appears to be part of a raid array:
    level=raid6 devices=8 ctime=Fri Apr 22 01:12:07 2011
mdadm: /dev/sdp1 appears to be part of a raid array:
    level=raid6 devices=8 ctime=Fri Apr 22 01:12:07 2011
mdadm: /dev/sdl1 appears to be part of a raid array:
    level=raid6 devices=8 ctime=Fri Apr 22 01:12:07 2011
Continue creating array?
Should I try it? ... it's weird that some devices are complaining about an ext2 filesystem ... sigh
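Before answering "y" to a prompt like that, it would be worth capturing the current superblocks somewhere, since --create rewrites them. A precaution, not a fix:
Code:
for d in /dev/sd[ijklmnop]1; do sudo mdadm --examine "$d" > ~/examine-$(basename $d).txt; done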

Last edited by fermulator; 01-06-2015 at 09:22 PM.
 
Old 01-06-2015, 09:34 PM   #8
fermulator
LQ Newbie
 
Registered: Feb 2011
Posts: 19

Original Poster
Rep: Reputation: 1
I just took a huge risk and did it:
Code:
fermulator@fermmy-server:/var/crash$ sudo mdadm --create /dev/md2000 --assume-clean --force --level=6 --raid-devices=8 --chunk=64 --layout=left-symmetric --metadata=1.1 --name=fermmy-server:2000  missing /dev/sdm1 /dev/sdn1 /dev/sdo1 missing /dev/sdi1 /dev/sdp1 /dev/sdl1
mdadm: /dev/sdm1 appears to contain an ext2fs file system
    size=2147481472K  mtime=Fri Apr 22 01:00:32 2011
mdadm: /dev/sdm1 appears to be part of a raid array:
    level=raid6 devices=8 ctime=Fri Apr 22 01:12:07 2011
mdadm: /dev/sdn1 appears to be part of a raid array:
    level=raid6 devices=8 ctime=Fri Apr 22 01:12:07 2011
mdadm: /dev/sdo1 appears to contain an ext2fs file system
    size=1891628992K  mtime=Wed Apr 26 04:55:12 2028
mdadm: /dev/sdo1 appears to be part of a raid array:
    level=raid6 devices=8 ctime=Fri Apr 22 01:12:07 2011
mdadm: /dev/sdi1 appears to contain an ext2fs file system
    size=1953513560K  mtime=Fri Apr 22 00:56:28 2011
mdadm: /dev/sdi1 appears to be part of a raid array:
    level=raid6 devices=8 ctime=Fri Apr 22 01:12:07 2011
mdadm: /dev/sdp1 appears to be part of a raid array:
    level=raid6 devices=8 ctime=Fri Apr 22 01:12:07 2011
mdadm: /dev/sdl1 appears to be part of a raid array:
    level=raid6 devices=8 ctime=Fri Apr 22 01:12:07 2011
Continue creating array? y
mdadm: array /dev/md2000 started.
So far, it looks "OK"
Code:
md2000 : active raid6 sdl1[7] sdp1[6] sdi1[5] sdo1[3] sdn1[2] sdm1[1]
      11720294016 blocks super 1.1 level 6, 64k chunk, algorithm 2 [8/6] [_UUU_UUU]
Code:
fermulator@fermmy-server:/var/crash$ sudo mdadm --detail /dev/md2000
/dev/md2000:
        Version : 1.1
  Creation Time : Tue Jan  6 22:32:46 2015
     Raid Level : raid6
     Array Size : 11720294016 (11177.34 GiB 12001.58 GB)
  Used Dev Size : 1953382336 (1862.89 GiB 2000.26 GB)
   Raid Devices : 8
  Total Devices : 6
    Persistence : Superblock is persistent

    Update Time : Tue Jan  6 22:32:46 2015
          State : clean, degraded 
 Active Devices : 6
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : fermmy-server:2000  (local to host fermmy-server)
           UUID : 78850141:a97c6eda:d7324f6c:d40e1bfa
         Events : 0

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8      193        1      active sync   /dev/sdm1
       2       8      209        2      active sync   /dev/sdn1
       3       8      225        3      active sync   /dev/sdo1
       4       0        0        4      removed
       5       8      129        5      active sync   /dev/sdi1
       6       8      241        6      active sync   /dev/sdp1
       7       8      177        7      active sync   /dev/sdl1
This diff of the mdadm --detail output LOOKS promising (compare it with the above):
[Attached image: Selection_415.png - side-by-side diff of the two mdadm --detail outputs]

As suggested by a helpful person on mdadm IRC (freenode), now to be safe we mark it as readonly before attempting to mount:
Code:
fermulator@fermmy-server:~$ sudo mdadm --readonly /dev/md2000
Now, it shows up as read-only:
Code:
md2000 : active (read-only) raid6 sdl1[7] sdp1[6] sdi1[5] sdo1[3] sdn1[2] sdm1[1]
      11720294016 blocks super 1.1 level 6, 64k chunk, algorithm 2 [8/6] [_UUU_UUU]
Sadly though, the kernel log complains about an unknown partition table on md2000:
Code:
Jan  6 22:32:47 fermmy-server kernel: [137113.837366] md: bind<sdm1>
Jan  6 22:32:47 fermmy-server kernel: [137113.839806] md: bind<sdn1>
Jan  6 22:32:47 fermmy-server kernel: [137113.849995] md: bind<sdo1>
Jan  6 22:32:47 fermmy-server kernel: [137113.856951] md: bind<sdi1>
Jan  6 22:32:47 fermmy-server kernel: [137113.857526] md: bind<sdp1>
Jan  6 22:32:47 fermmy-server kernel: [137113.857750] md: bind<sdl1>
Jan  6 22:32:47 fermmy-server kernel: [137113.860294] md/raid:md2000: device sdl1 operational as raid disk 7
Jan  6 22:32:47 fermmy-server kernel: [137113.860298] md/raid:md2000: device sdp1 operational as raid disk 6
Jan  6 22:32:47 fermmy-server kernel: [137113.860300] md/raid:md2000: device sdi1 operational as raid disk 5
Jan  6 22:32:47 fermmy-server kernel: [137113.860303] md/raid:md2000: device sdo1 operational as raid disk 3
Jan  6 22:32:47 fermmy-server kernel: [137113.860305] md/raid:md2000: device sdn1 operational as raid disk 2
Jan  6 22:32:47 fermmy-server kernel: [137113.860308] md/raid:md2000: device sdm1 operational as raid disk 1
Jan  6 22:32:47 fermmy-server kernel: [137113.860947] md/raid:md2000: allocated 8384kB
Jan  6 22:32:47 fermmy-server kernel: [137113.861017] md/raid:md2000: raid level 6 active with 6 out of 8 devices, algorithm 2
Jan  6 22:32:47 fermmy-server kernel: [137113.861077] RAID conf printout:
Jan  6 22:32:47 fermmy-server kernel: [137113.861079]  --- level:6 rd:8 wd:6
Jan  6 22:32:47 fermmy-server kernel: [137113.861082]  disk 1, o:1, dev:sdm1
Jan  6 22:32:47 fermmy-server kernel: [137113.861084]  disk 2, o:1, dev:sdn1
Jan  6 22:32:47 fermmy-server kernel: [137113.861086]  disk 3, o:1, dev:sdo1
Jan  6 22:32:47 fermmy-server kernel: [137113.861089]  disk 5, o:1, dev:sdi1
Jan  6 22:32:47 fermmy-server kernel: [137113.861091]  disk 6, o:1, dev:sdp1
Jan  6 22:32:47 fermmy-server kernel: [137113.861093]  disk 7, o:1, dev:sdl1
Jan  6 22:32:47 fermmy-server kernel: [137113.861134] md2000: detected capacity change from 0 to 12001581072384
Jan  6 22:32:47 fermmy-server kernel: [137113.871705]  md2000: unknown partition table
I tried to mount it as ext4
Code:
fermulator@fermmy-server:/dev/disk/by-uuid$ sudo mount -t ext4 -o ro /dev/md2000 /media/arrays/md2000 
mount: wrong fs type, bad option, bad superblock on /dev/md2000,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so
But no luck
Code:
Jan  6 22:48:53 fermmy-server kernel: [138080.170164] EXT4-fs (md2000): VFS: Can't find ext4 filesystem
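For anyone following along, here's the kind of read-only probing that's still possible (the backup-superblock location depends on the filesystem block size, so the numbers below are a guess for 4K blocks):
Code:
sudo blkid -p /dev/md2000                     # low-level probe for any filesystem signature
sudo file -s /dev/md2000
sudo e2fsck -n -b 32768 -B 4096 /dev/md2000   # try the first ext4 backup superblock, read-only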

Last edited by fermulator; 01-06-2015 at 09:58 PM.
 
Old 01-06-2015, 10:22 PM   #9
fermulator
LQ Newbie
 
Registered: Feb 2011
Posts: 19

Original Poster
Rep: Reputation: 1
I think I've completely failed here. Looking more closely at the mdadm --detail output, the "Used Dev Size" looks completely wrong! The array was ~85% full last time I checked (I remember seeing this in the motd)...

Yet here it is ... saying only 2Gb used? no way:
Code:
     Array Size : 11720294016 (11177.34 GiB 12001.58 GB)
  Used Dev Size : 1953382336 (1862.89 GiB 2000.26 GB)
 
Old 01-06-2015, 11:17 PM   #10
fermulator
LQ Newbie
 
Registered: Feb 2011
Posts: 19

Original Poster
Rep: Reputation: 1
Been doing more reading. I stumbled over a post about the "ordering" getting messed up. Maybe my concept of the "correct order" is wrong? Luckily I have e-mail notifications set up, so I could dig through them:

Here's a SUMMARY of the device-role transitions:
Code:
-------------------------------------------------------------------------------------------
|                     |                Device Role #
-------------------------------------------------------------------------------------------
|  DEVICE  | COMMENTS | Dec GOOD | Jan4 6:28AM | 12:10PM | 12:40PM | Jan5 12:30AM | 12:50AM | 8:30AM | 6:34PM | Jan6 6:45AM |
-------------------------------------------------------------------------------------------
| /dev/sdi |          |    4     |      4      |    4    |    4    |      4       |    4    |    4   |    4   |      4      |
| /dev/sdj | failing  |    5     |   5 FAIL    |   ( )   |    8    |      8       |  8 FAIL |   ( )  |   ( )  |     ( )     |
| /dev/sdk | failing? |    0     |      0      |    0    |    0    |      0       |    0    |    0   | 0 FAIL |   0 FAIL    |
| /dev/sdl |          |    6     |      6      |    6    |    6    |      6       |    6    |    6   |    6   |      6      |
| /dev/sdm |          |    1     |      1      |    1    |    1    |     ( )      |   ( )   |   ( )  |    8   |   8 SPARE   |
| /dev/sdn |          |    2     |      2      |    2    |    2    |      2       |    2    |    2   |    2   |      2      |
| /dev/sdo |          |    3     |      3      |    3    |    3    |      3       |    3    |    3   |    3   |      3      |
| /dev/sdp |          |    7     |      7      |    7    |    7    |      7       |    7    |    7   |    7   |      7      |
-------------------------------------------------------------------------------------------
All of the above comes from:

Code:
Dec GOOD
md2000 : active raid6 sdo1[3] sdj1[5] sdk1[0] sdi1[4] sdn1[2] sdm1[1] sdp1[7] sdl1[6]
      11721080448 blocks super 1.1 level 6, 64k chunk, algorithm 2 [8/8] [UUUUUUUU]

FAIL EVENT on Jan 4th @ 6:28AM
md2000 : active raid6 sdo1[3] sdj1[5](F) sdk1[0] sdi1[4] sdn1[2] sdm1[1] sdp1[7] sdl1[6]
      11721080448 blocks super 1.1 level 6, 64k chunk, algorithm 2 [8/7] [UUUU_UUU]
      [==============>......]  check = 73.6% (1439539228/1953513408) finish=536.6min speed=15960K/sec

DEGRADED EVENT on Jan 4th @ 6:39AM
md2000 : active raid6 sdo1[3] sdj1[5](F) sdk1[0] sdi1[4] sdn1[2] sdm1[1] sdp1[7] sdl1[6]
      11721080448 blocks super 1.1 level 6, 64k chunk, algorithm 2 [8/7] [UUUU_UUU]
      [==============>......]  check = 73.6% (1439539228/1953513408) finish=5091.8min speed=1682K/sec

DEGRADED EVENT on Jan 4th @ 12:10PM
md2000 : active raid6 sdo1[3] sdn1[2] sdi1[4] sdm1[1] sdk1[0] sdp1[7] sdl1[6]
      11721080448 blocks super 1.1 level 6, 64k chunk, algorithm 2 [8/7] [UUUU_UUU]

DEGRADED EVENT on Jan 4th @ 12:21PM
md2000 : active raid6 sdk1[0] sdo1[3] sdm1[1] sdn1[2] sdi1[4] sdp1[7] sdl1[6]
      11721080448 blocks super 1.1 level 6, 64k chunk, algorithm 2 [8/7] [UUUU_UUU]

DEGRADED EVENT on Jan 4th  @ 12:40PM
md2000 : active raid6 sdj1[8] sdm1[1] sdo1[3] sdn1[2] sdk1[0] sdi1[4] sdp1[7] sdl1[6]
      11721080448 blocks super 1.1 level 6, 64k chunk, algorithm 2 [8/7] [UUUU_UUU]
      [>....................]  recovery =  0.2% (5137892/1953513408) finish=921.7min speed=35227K/sec

DEGRADED EVENT on Jan 5th @ 12:30AM
md2000 : active raid6 sdk1[0] sdo1[3] sdn1[2] sdj1[8] sdi1[4] sdl1[6] sdp1[7]
      11721080448 blocks super 1.1 level 6, 64k chunk, algorithm 2 [8/6] [U_UU_UUU]
      [============>........]  recovery = 62.9% (1229102028/1953513408) finish=259.8min speed=46466K/sec

FAIL SPARE EVENT on Jan 5th @ 12:50AM
md2000 : active raid6 sdk1[0] sdo1[3] sdn1[2] sdj1[8](F) sdi1[4] sdl1[6] sdp1[7]
      11721080448 blocks super 1.1 level 6, 64k chunk, algorithm 2 [8/6] [U_UU_UUU]
      [=============>.......]  recovery = 68.1% (1332029020/1953513408) finish=150.3min speed=68897K/sec

DEGRADED EVENT on Jan 5th @ 6:43AM
md2000 : active raid6 sdk1[0] sdo1[3] sdn1[2] sdj1[8](F) sdi1[4] sdl1[6] sdp1[7]
      11721080448 blocks super 1.1 level 6, 64k chunk, algorithm 2 [8/6] [U_UU_UUU]
      [=============>.......]  recovery = 68.1% (1332029020/1953513408) finish=76028.6min speed=136K/sec

TEST MESSAGE on Jan 5th @ 8:30AM
md2000 : active raid6 sdo1[3] sdi1[4] sdn1[2] sdk1[0] sdl1[6] sdp1[7]
      11721080448 blocks super 1.1 level 6, 64k chunk, algorithm 2 [8/6] [U_UU_UUU]
What a mess; does anyone have any thoughts on the correct device ordering?
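One wrinkle when reading these mdstat snapshots: the [N] after each device is md's internal device number, which doesn't have to match the role slot; the "Device Role : Active device N" line in each member's superblock is the authoritative record. If those --examine dumps were saved before the re-create (the filenames below are hypothetical), the original slots can be pulled back out like this:
Code:
grep -H 'Device Role' ~/examine-sd?1.txt    # hypothetical files, captured from the pre-create --examine output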

Last edited by fermulator; 01-06-2015 at 11:26 PM.
 
Old 01-07-2015, 07:30 AM   #11
fermulator
LQ Newbie
 
Registered: Feb 2011
Posts: 19

Original Poster
Rep: Reputation: 1
I was speaking with someone on IRC again in linux-raid; he suggested that since I used --create with a NEWER version of mdadm than the one the array was originally created with, the data offset on the devices could have changed. Oh.

Turns out it's true:
[Attached image: Selection_416.png - comparison showing the changed data offsets]

It's been suggested that I need to compile the 3.3.x version of mdadm, which apparently has a configurable data offset at creation time.
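For the record, building it from source is fairly painless (a sketch; the git URL and tag name are from memory):
Code:
git clone git://git.kernel.org/pub/scm/utils/mdadm/mdadm.git
cd mdadm
git checkout mdadm-3.3.2
make
./mdadm --version    # run the freshly built binary in place, without installing it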
 
Old 01-07-2015, 08:49 AM   #12
fermulator
LQ Newbie
 
Registered: Feb 2011
Posts: 19

Original Poster
Rep: Reputation: 1

I downloaded the latest mdadm (3.3.2) and tried re-creating with varying data offsets, but it still didn't work. I'm at a loss.

Code:
fermulator@fermmy-server:~$ sudo mdadm --create /dev/md2000 --assume-clean --force --level=6 --raid-devices=8 --chunk=64 --layout=left-symmetric --metadata=1.1 --data-offset=variable --name=fermmy-server:2000 /dev/sdk1:264s missing /dev/sdn1:264s /dev/sdo1:264s /dev/sdi1:264s missing /dev/sdl1:264s /dev/sdp1:264s
mdadm: /dev/sdk1 appears to contain an ext2fs file system
       size=1695282944K  mtime=Tue Apr 12 11:10:24 1977
mdadm: /dev/sdk1 appears to be part of a raid array:
       level=raid6 devices=8 ctime=Wed Jan  7 09:45:22 2015
mdadm: /dev/sdn1 appears to be part of a raid array:
       level=raid6 devices=8 ctime=Wed Jan  7 09:45:22 2015
mdadm: /dev/sdo1 appears to contain an ext2fs file system
       size=1891628992K  mtime=Wed Apr 26 04:55:12 2028
mdadm: /dev/sdo1 appears to be part of a raid array:
       level=raid6 devices=8 ctime=Wed Jan  7 09:45:22 2015
mdadm: /dev/sdi1 appears to contain an ext2fs file system
       size=1953513560K  mtime=Fri Apr 22 00:56:28 2011
mdadm: /dev/sdi1 appears to be part of a raid array:
       level=raid6 devices=8 ctime=Wed Jan  7 09:45:22 2015
mdadm: /dev/sdl1 appears to be part of a raid array:
       level=raid6 devices=8 ctime=Wed Jan  7 09:45:22 2015
mdadm: /dev/sdp1 appears to be part of a raid array:
       level=raid6 devices=8 ctime=Wed Jan  7 09:45:22 2015
Continue creating array? y
mdadm: array /dev/md2000 started.
Code:
md2000 : active (read-only) raid6 sdp1[7] sdl1[6] sdi1[4] sdo1[3] sdn1[2] sdk1[0]
      11721080448 blocks super 1.1 level 6, 64k chunk, algorithm 2 [8/6] [U_UUU_UU]
      bitmap: 0/15 pages [0KB], 65536KB chunk
Code:
fermulator@fermmy-server:~$ sudo mdadm --examine /dev/sd[ijklmnop]1

/dev/sdi1:
          Magic : a92b4efc  
        Version : 1.1
    Feature Map : 0x1
     Array UUID : de003e29:aea4dbe3:5b889b64:e48df8d1
           Name : fermmy-server:2000  (local to host fermmy-server)
  Creation Time : Wed Jan  7 09:46:44 2015
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 3907026856 (1863.02 GiB 2000.40 GB)
     Array Size : 11721080448 (11178.09 GiB 12002.39 GB)
  Used Dev Size : 3907026816 (1863.02 GiB 2000.40 GB)
    Data Offset : 264 sectors
   Super Offset : 0 sectors 
   Unused Space : before=184 sectors, after=40 sectors
          State : clean
    Device UUID : ad09753f:578cfffb:27da9786:f9710627

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Jan  7 09:46:44 2015
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : c7603819 - correct
         Events : 0

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 4
   Array State : A.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)


/dev/sdj1:
          Magic : a92b4efc  
        Version : 1.1
    Feature Map : 0x2
     Array UUID : 15d2158f:5cf74d95:fd7f5607:0e447573
           Name : fermmy-server:2000  (local to host fermmy-server)
  Creation Time : Fri Apr 22 01:12:07 2011
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 3907026816 (1863.02 GiB 2000.40 GB)
     Array Size : 11721080448 (11178.09 GiB 12002.39 GB)
    Data Offset : 304 sectors
   Super Offset : 0 sectors 
Recovery Offset : 2441891840 sectors
   Unused Space : before=232 sectors, after=0 sectors
          State : clean
    Device UUID : eee3ae0e:f594fdba:58e19113:bc196464

    Update Time : Mon Jan  5 00:30:41 2015
       Checksum : 7a5a498d - correct
         Events : 42912

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 4
   Array State : A.AAAAAA ('A' == active, '.' == missing, 'R' == replacing)


/dev/sdk1:
          Magic : a92b4efc  
        Version : 1.1
    Feature Map : 0x1
     Array UUID : de003e29:aea4dbe3:5b889b64:e48df8d1
           Name : fermmy-server:2000  (local to host fermmy-server)
  Creation Time : Wed Jan  7 09:46:44 2015
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 3907026856 (1863.02 GiB 2000.40 GB)
     Array Size : 11721080448 (11178.09 GiB 12002.39 GB)
  Used Dev Size : 3907026816 (1863.02 GiB 2000.40 GB)
    Data Offset : 264 sectors
   Super Offset : 0 sectors 
   Unused Space : before=184 sectors, after=40 sectors
          State : clean
    Device UUID : 57b015f5:243f6d96:56802d03:06e40727

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Jan  7 09:46:44 2015
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 9405a9c8 - correct
         Events : 0

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : A.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)

/dev/sdl1:
          Magic : a92b4efc  
        Version : 1.1
    Feature Map : 0x1
     Array UUID : de003e29:aea4dbe3:5b889b64:e48df8d1
           Name : fermmy-server:2000  (local to host fermmy-server)
  Creation Time : Wed Jan  7 09:46:44 2015
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 3907026856 (1863.02 GiB 2000.40 GB)
     Array Size : 11721080448 (11178.09 GiB 12002.39 GB)
  Used Dev Size : 3907026816 (1863.02 GiB 2000.40 GB)
    Data Offset : 264 sectors
   Super Offset : 0 sectors 
   Unused Space : before=184 sectors, after=40 sectors
          State : clean
    Device UUID : 9f0b9c89:4b3b911f:31a09b40:1b2c697f

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Jan  7 09:46:44 2015
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 477f692d - correct
         Events : 0

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 6
   Array State : A.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)

/dev/sdm1:
          Magic : a92b4efc  
        Version : 1.1
    Feature Map : 0x0
     Array UUID : 06e8d814:a3164ab8:eefb43b3:5461f971
           Name : fermmy-server:2000  (local to host fermmy-server)
  Creation Time : Wed Jan  7 00:35:28 2015
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 3906764976 (1862.89 GiB 2000.26 GB)
     Array Size : 11720294016 (11177.34 GiB 12001.58 GB)
  Used Dev Size : 3906764672 (1862.89 GiB 2000.26 GB)
    Data Offset : 262144 sectors
   Super Offset : 0 sectors 
   Unused Space : before=262072 sectors, after=304 sectors
          State : clean
    Device UUID : fcd9f241:df79d10f:d9a21c8c:19fe108b

    Update Time : Wed Jan  7 00:35:28 2015
       Checksum : f5e6e824 - correct
         Events : 0

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : AAAAA.AA ('A' == active, '.' == missing, 'R' == replacing)


/dev/sdn1:
          Magic : a92b4efc  
        Version : 1.1
    Feature Map : 0x1
     Array UUID : de003e29:aea4dbe3:5b889b64:e48df8d1
           Name : fermmy-server:2000  (local to host fermmy-server)
  Creation Time : Wed Jan  7 09:46:44 2015
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 3907026856 (1863.02 GiB 2000.40 GB)
     Array Size : 11721080448 (11178.09 GiB 12002.39 GB)
  Used Dev Size : 3907026816 (1863.02 GiB 2000.40 GB)
    Data Offset : 264 sectors
   Super Offset : 0 sectors 
   Unused Space : before=184 sectors, after=40 sectors
          State : clean
    Device UUID : 32e974ab:86772ef5:eedcf655:c1fa4708

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Jan  7 09:46:44 2015
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : dd2f8e5a - correct
         Events : 0

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 2
   Array State : A.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)


/dev/sdo1:
          Magic : a92b4efc  
        Version : 1.1
    Feature Map : 0x1
     Array UUID : de003e29:aea4dbe3:5b889b64:e48df8d1
           Name : fermmy-server:2000  (local to host fermmy-server)
  Creation Time : Wed Jan  7 09:46:44 2015
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 3907026856 (1863.02 GiB 2000.40 GB)
     Array Size : 11721080448 (11178.09 GiB 12002.39 GB)
  Used Dev Size : 3907026816 (1863.02 GiB 2000.40 GB)
    Data Offset : 264 sectors
   Super Offset : 0 sectors 
   Unused Space : before=184 sectors, after=40 sectors
          State : clean
    Device UUID : 472d1067:aa54f4e2:9661018c:c9fe0b43

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Jan  7 09:46:44 2015
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : f75f3844 - correct
         Events : 0

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 3
   Array State : A.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)

/dev/sdp1:
          Magic : a92b4efc  
        Version : 1.1
    Feature Map : 0x1
     Array UUID : de003e29:aea4dbe3:5b889b64:e48df8d1
           Name : fermmy-server:2000  (local to host fermmy-server)
  Creation Time : Wed Jan  7 09:46:44 2015
     Raid Level : raid6
   Raid Devices : 8

 Avail Dev Size : 3907026856 (1863.02 GiB 2000.40 GB)
     Array Size : 11721080448 (11178.09 GiB 12002.39 GB)
  Used Dev Size : 3907026816 (1863.02 GiB 2000.40 GB)
    Data Offset : 264 sectors
   Super Offset : 0 sectors 
   Unused Space : before=184 sectors, after=40 sectors
          State : clean
    Device UUID : e90f62d0:c9eed541:9106d664:4af51082

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Jan  7 09:46:44 2015
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : d76c5085 - correct
         Events : 0

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 7
   Array State : A.AAA.AA ('A' == active, '.' == missing, 'R' == replacing)
Code:
fermulator@fermmy-server:/media/arrays$ sudo mount -t ext4 -o ro /dev/md2000 /media/arrays/md2000
[sudo] password for fermulator:
mount: wrong fs type, bad option, bad superblock on /dev/md2000,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

Last edited by fermulator; 01-07-2015 at 08:54 AM.
 
Old 01-09-2015, 08:18 AM   #13
fermulator
LQ Newbie
 
Registered: Feb 2011
Posts: 19

Original Poster
Rep: Reputation: 1
This is potentially my last reply. At this point I have been unable to recover the array; all of my data is lost. I'm not touching these drives for at least a month, in the hope that some expert on these forums or on the linux-raid mailing list will know what to do.
 
Old 07-05-2015, 03:42 PM   #14
fermulator
LQ Newbie
 
Registered: Feb 2011
Posts: 19

Original Poster
Rep: Reputation: 1
Closing this off. I won't make the mistake of using desktop-grade drives without TLER support in software RAID again.
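For anyone landing here later: smartctl can report whether a drive supports SCT Error Recovery Control and, on drives that do, set the timeouts (values are in tenths of a second):
Code:
sudo smartctl -l scterc /dev/sdX              # query the current ERC/TLER settings
sudo smartctl -l scterc,70,70 /dev/sdX        # 7 s read / 7 s write timeout, if supported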

For my own sake, while cleaning up my browser tabs, this is what I had open:
* https://raid.wiki.kernel.org/index.p...software_RAID?
* https://raid.wiki.kernel.org/index.php/Linux_Raid
* https://raid.wiki.kernel.org/index.php/Reconstruction
* http://serverfault.com/questions/347...ad-of-re-using
* http://www.accs.com/p_and_p/RAID/LinuxRAID.html
* https://www.thomas-krenn.com/en/wiki..._Software_RAID
* http://valentijn.sessink.nl/?p=557
* https://github.com/pturmel/lsdrv
* https://en.wikipedia.org/wiki/Error_recovery_control
* http://marc.info/?l=linux-raid&m=135811522817345&w=1
* http://marc.info/?l=linux-raid&m=133665797115876&w=2
* http://marc.info/?l=linux-raid&m=142504030927143&w=2

I also won't be so greedy next time in trying to achieve such a high storage-efficiency ratio. I'll likely go with mirrored pools (ZFS).
 
Old 07-06-2015, 09:56 AM   #15
S.Haran
LQ Newbie
 
Registered: Jun 2014
Location: Boston USA
Posts: 17

Rep: Reputation: Disabled
Were you able to recover your data?

I often work on failed mdadm RAID arrays and recovery is usually possible if the drives are sound. I would be happy to assist.
 
1 member found this post helpful.
  

