Old 10-21-2015, 01:39 PM   #1
pacmanlives
LQ Newbie
 
Registered: Sep 2009
Location: Boulder, CO
Distribution: Sabayon
Posts: 10

Rep: Reputation: 0
RAID grow interrupted.


Background: I set up a RAID 5 array with 2 disks, then put LVM+XFS on top of the RAID. All was good. I then copied over the data I had on another identical drive, fdisked that drive, and added it to the RAID. After a reboot the RAID no longer works.
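For reference, the setup described above was roughly along these lines (reconstructed from the description and from the device/LVM names that appear later in this thread; the exact commands and the backup-file path were not recorded, so treat this as a sketch only):
Code:
# create a 2-disk RAID 5 and layer LVM + XFS on top
mdadm --create /dev/md127 --level=5 --raid-devices=2 /dev/sda1 /dev/sdb1
pvcreate /dev/md127
vgcreate StoragePool1 /dev/md127
lvcreate -l 100%FREE -n storage1 StoragePool1
mkfs.xfs /dev/StoragePool1/storage1
# later: add the third disk and grow the array from 2 to 3 devices
mdadm --manage /dev/md127 --add /dev/sdc1
mdadm --grow /dev/md127 --raid-devices=3 --backup-file=/root/grow.bak   # placeholder path
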
Code:
sharp@MotherShip ~ $ uname -a
Linux MotherShip 4.2.0-sabayon #1 SMP Mon Oct 19 08:36:08 UTC 2015 x86_64 AMD FX(tm)-8120 Eight-Core Processor AuthenticAMD GNU/Linux
Here is where it gets interesting: mdadm thinks the RAID level is 0, which is incorrect.
Code:
MotherShip ~ # mdadm --detail /dev/md127
/dev/md127:
        Version : 1.2
     Raid Level : raid0
  Total Devices : 3
    Persistence : Superblock is persistent

          State : inactive

  Delta Devices : 1, (-1->0)
      New Level : raid5
     New Layout : left-symmetric
  New Chunksize : 512K

           Name : sabayon:0
           UUID : f75d2c00:d2a2ee55:eed84606:4ef98acf
         Events : 13648

    Number   Major   Minor   RaidDevice

       -       8        1        -        /dev/sda1
       -       8       17        -        /dev/sdb1
       -       8       33        -        /dev/sdc1
Code:
MotherShip ~ # cat /proc/mdstat 
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath] [faulty] 
md127 : inactive sda1[0](S) sdb1[2](S) sdc1[3](S)
      11720659414 blocks super 1.2
       
unused devices: <none>

Code:
MotherShip ~ # mdadm --assemble /dev/md127
mdadm: Failed to restore critical section for reshape, sorry.
       Possibly you needed to specify the --backup-file
I have done some Googling but have not really come up with anything. I read on one site that you can re-create the RAID array and it will not delete the data on the disks: http://www.geekride.com/activating-a...ve-raid-array/
I am concerned about losing my data on these drives....

I don't have very much experience with Linux software RAIDs.
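For anyone else who hits the "Failed to restore critical section for reshape" message: the mdadm man page points at --backup-file (and --invalid-backup when the backup is stale or missing) for assembling an array whose reshape was interrupted. A rough, untested sketch; the backup-file path below is a placeholder, not the actual file from this system:
Code:
mdadm --stop /dev/md127
# point --assemble at the backup file written by the original --grow
mdadm --assemble /dev/md127 --backup-file=/root/grow_backup.bak /dev/sda1 /dev/sdb1 /dev/sdc1
# if the backup file is missing or out of date, mdadm can be told to proceed anyway,
# at the risk of the small section that was being relocated:
mdadm --assemble /dev/md127 --invalid-backup --backup-file=/root/grow_backup.bak /dev/sda1 /dev/sdb1 /dev/sdc1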

Last edited by pacmanlives; 10-27-2015 at 11:46 AM.
 
Old 10-21-2015, 03:16 PM   #2
pacmanlives
LQ Newbie
 
Registered: Sep 2009
Location: Boulder, CO
Distribution: Sabayon
Posts: 10

Original Poster
Rep: Reputation: 0
I am wondering if the grow of the array was interrupted. I do have the backup file, though. If that is the case, can I resume it?
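A quick way to see whether the reshape ever actually moved any data is to pull the reshape-related fields out of --examine for each member (partition names taken from the --detail output above):
Code:
mdadm --examine /dev/sd[abc]1 | egrep "Reshape|Delta Devices|Events"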

Code:
MotherShip ~ # mdadm --examine /dev/sd*
/dev/sda:
   MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)
/dev/sda1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x5
     Array UUID : f75d2c00:d2a2ee55:eed84606:4ef98acf
           Name : sabayon:0
  Creation Time : Mon Oct 19 14:25:10 2015
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 7813772943 (3725.90 GiB 4000.65 GB)
     Array Size : 7813772288 (7451.79 GiB 8001.30 GB)
  Used Dev Size : 7813772288 (3725.90 GiB 4000.65 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=655 sectors
          State : active
    Device UUID : bb06f7b5:88698e4d:4040e90a:2582d559

Internal Bitmap : 8 sectors from superblock
  Reshape pos'n : 0
  Delta Devices : 1 (2->3)

    Update Time : Wed Oct 21 10:59:31 2015
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 28c1e479 - correct
         Events : 13648

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb:
   MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x5
     Array UUID : f75d2c00:d2a2ee55:eed84606:4ef98acf
           Name : sabayon:0
  Creation Time : Mon Oct 19 14:25:10 2015
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 7813772943 (3725.90 GiB 4000.65 GB)
     Array Size : 7813772288 (7451.79 GiB 8001.30 GB)
  Used Dev Size : 7813772288 (3725.90 GiB 4000.65 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=655 sectors
          State : active
    Device UUID : f70d65c6:9474b3e7:d55ad21e:ca1ee297

Internal Bitmap : 8 sectors from superblock
  Reshape pos'n : 0
  Delta Devices : 1 (2->3)

    Update Time : Wed Oct 21 10:59:31 2015
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 254aadfe - correct
         Events : 13648

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc:
   MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x5
     Array UUID : f75d2c00:d2a2ee55:eed84606:4ef98acf
           Name : sabayon:0
  Creation Time : Mon Oct 19 14:25:10 2015
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 7813772943 (3725.90 GiB 4000.65 GB)
     Array Size : 7813772288 (7451.79 GiB 8001.30 GB)
  Used Dev Size : 7813772288 (3725.90 GiB 4000.65 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=655 sectors
          State : clean
    Device UUID : d2aa2a14:3f29ecb6:33e13306:57bbfac0

Internal Bitmap : 8 sectors from superblock
  Reshape pos'n : 0
  Delta Devices : 1 (2->3)

    Update Time : Wed Oct 21 10:59:31 2015
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 52c3226f - correct
         Events : 13648

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd:
   MBR Magic : aa55
Partition[0] :      1024000 sectors at         2048 (type 83)
Partition[1] :     61504576 sectors at      1026048 (type 8e)
mdadm: No md superblock detected on /dev/sdd1.
mdadm: No md superblock detected on /dev/sdd2.
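The "Reshape pos'n : 0" on all three members suggests the reshape never progressed past the starting point. Newer mdadm (3.3 and later) documents an --update=revert-reshape option for --assemble that is meant to undo a reshape in exactly that situation; whether it applies to this particular metadata state is not certain, so this is only a sketch:
Code:
mdadm --stop /dev/md127
mdadm --assemble /dev/md127 --update=revert-reshape /dev/sda1 /dev/sdb1 /dev/sdc1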

Last edited by pacmanlives; 10-21-2015 at 03:41 PM.
 
Old 10-21-2015, 03:43 PM   #3
Red Squirrel
Senior Member
 
Registered: Dec 2003
Distribution: Mint 20.1 on workstation, Debian 11 on servers
Posts: 1,336

Rep: Reputation: 54
So you did a RAID 5 with 2 drives, correct? That should technically work, since it will just be degraded, but I find mdadm does weird things when you try that. I did a RAID 10 with 7 drives once, figuring it would just show as one drive missing, but instead it somehow did a full RAID 10 with that many drives. I ended up having to completely rebuild it, as it would not even let me add the other drive.
 
Old 10-21-2015, 04:31 PM   #4
pacmanlives
LQ Newbie
 
Registered: Sep 2009
Location: Boulder, CO
Distribution: Sabayon
Posts: 10

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by Red Squirrel
So you did a RAID 5 with 2 drives, correct? That should technically work, since it will just be degraded, but I find mdadm does weird things when you try that. I did a RAID 10 with 7 drives once, figuring it would just show as one drive missing, but instead it somehow did a full RAID 10 with that many drives. I ended up having to completely rebuild it, as it would not even let me add the other drive.
I did that initially and then I added a 3rd disk.
 
Old 10-21-2015, 04:39 PM   #5
pacmanlives
LQ Newbie
 
Registered: Sep 2009
Location: Boulder, CO
Distribution: Sabayon
Posts: 10

Original Poster
Rep: Reputation: 0
So I was able to recreate this, and I now know for sure that it was the interrupted grow of the RAID that caused this.

Steps to recreate this issue.
1. Create a VM. I used a CentOS 7 VM for this.
2. Add 3 disks to the VM.
3. Create the RAID: mdadm --create md0 -l 5 --raid-devices=2 /dev/sdb /dev/sdc
4. Format the RAID: mkfs.xfs /dev/md0
5. Mount /dev/md0.
6. Create some files: touch file{1..1000}.txt
7. Add the other disk to the RAID: mdadm --manage md0 --add /dev/sdd
8. Grow the RAID onto the new disk: mdadm --grow --raid-devices=3 --backup-file=/root/grow_md1.bak /dev/md0
9. Reboot.
Code:
[root@localhost ~]# cat /proc/mdstat 
Personalities : 
md0 : inactive sdd[3](S) sdb[0](S) sdc[2](S)
      1569792 blocks super 1.2
       
unused devices: <none>
Code:
[root@localhost ~]# mdadm --detail /dev/md0 
/dev/md0:
        Version : 1.2
     Raid Level : raid0
  Total Devices : 3
    Persistence : Superblock is persistent

          State : inactive

  Delta Devices : 1, (-1->0)
      New Level : raid5
     New Layout : left-symmetric
  New Chunksize : 512K

           Name : localhost.localdomain:md0  (local to host localhost.localdomain)
           UUID : 76857705:5e03b7be:9993da68:8d766598
         Events : 27

    Number   Major   Minor   RaidDevice

       -       8       16        -        /dev/sdb
       -       8       32        -        /dev/sdc
       -       8       48        -        /dev/sdd
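Going by the mdadm man page, the reproduction case above should be resumable by reassembling with the backup file written by the --grow command in step 8, or by continuing the grow once the array is assembled. Untested sketch, using the device names from the VM:
Code:
mdadm --stop /dev/md0
mdadm --assemble /dev/md0 --backup-file=/root/grow_md1.bak /dev/sdb /dev/sdc /dev/sdd
# if the array assembles but the reshape does not restart on its own:
mdadm --grow --continue /dev/md0 --backup-file=/root/grow_md1.bak
cat /proc/mdstat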

Last edited by pacmanlives; 10-21-2015 at 05:14 PM.
 
Old 10-22-2015, 02:31 PM   #6
pacmanlives
LQ Newbie
 
Registered: Sep 2009
Location: Boulder, CO
Distribution: Sabayon
Posts: 10

Original Poster
Rep: Reputation: 0
I was able to get it to report the right RAID level and show it in an active state, but it will not enter a running state...
Code:
MotherShip ~ # mdadm --manage  /dev/md127 -R

MotherShip ~ # mdadm --detail /dev/md127              
/dev/md127:
        Version : 1.2
  Creation Time : Mon Oct 19 14:25:10 2015
     Raid Level : raid5
  Used Dev Size : -1
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Wed Oct 21 10:59:31 2015
          State : active, Not Started 
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

  Delta Devices : 1, (2->3)

           Name : sabayon:0
           UUID : f75d2c00:d2a2ee55:eed84606:4ef98acf
         Events : 13648

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       8       17        1      active sync   /dev/sdb1
       3       8       33        2      active sync   /dev/sdc1
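When md reports "active, Not Started" like this, the kernel log and the md sysfs state usually say why the array refuses to run; a quick look (standard md sysfs paths):
Code:
cat /sys/block/md127/md/array_state
dmesg | tail -n 30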
 
Old 10-27-2015, 11:45 AM   #7
pacmanlives
LQ Newbie
 
Registered: Sep 2009
Location: Boulder, CO
Distribution: Sabayon
Posts: 10

Original Poster
Rep: Reputation: 0
Well, I fixed it. I was scared as hell to do this, but it did work for me and my data seems to be there.

Solution:
1. I popped out the last drive I added.
2. mdadm --stop /dev/md127
3. mdadm --create md0 -l 5 --raid-devices=2 /dev/sda1 /dev/sdb1
4. The array came back and started rebuilding:
Code:
MotherShip ~ # mdadm --detail /dev/md127 
/dev/md127:
        Version : 1.2
  Creation Time : Tue Oct 27 09:08:13 2015
     Raid Level : raid5
     Array Size : 3906886144 (3725.90 GiB 4000.65 GB)
  Used Dev Size : 3906886144 (3725.90 GiB 4000.65 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue Oct 27 09:14:24 2015
          State : clean, degraded, recovering 
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 1% complete

           Name : MotherShip:md0  (local to host MotherShip)
           UUID : fdb036a2:3ab017be:b3b70bd0:af691ae1
         Events : 84

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       8       17        1      spare rebuilding   /dev/sdb1
Code:
mount -t xfs  /dev/mapper/StoragePool1-storage1 /mnt/storage/
Code:
MotherShip ~ # mount
...
/dev/mapper/StoragePool1-storage1 on /mnt/storage type xfs (rw,relatime,attr2,inode64,sunit=1024,swidth=1024,noquota)
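Once the two-disk rebuild finishes, the removed third drive can presumably be re-added and the grow retried. Sketch only: it assumes the removed disk comes back as /dev/sdc1, and the backup-file path is a placeholder. Zeroing the stale superblock first is worthwhile, since that disk still carries metadata from the old array:
Code:
# wipe the old md metadata on the re-inserted disk
mdadm --zero-superblock /dev/sdc1
# add it to the rebuilt array and grow from 2 to 3 devices
mdadm --manage /dev/md127 --add /dev/sdc1
mdadm --grow /dev/md127 --raid-devices=3 --backup-file=/root/grow_md127.bak
# watch progress and do not reboot until the reshape finishes
watch cat /proc/mdstat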
 
  

