06-27-2012, 09:42 PM   #1
fatmann66 (LQ Newbie)
Help with RAID 5 recovery

Running Ubuntu 10.04 LTS.
I have a RAID 5 configuration with 4 drives. Two of the drives took a power blip (a loose cable, I think) and the array went to clean, degraded. This has happened before; I normally fail the drive and re-add it, no big deal, but I wasn't able to do it this time.
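
For context, the single-drive re-add that normally works for me looks roughly like this (a sketch, assuming the flaky member is /dev/sdb1):
Code:
# sketch of the usual single-drive recovery; /dev/sdb1 stands in for the member that dropped out
sudo mdadm /dev/md0 --fail /dev/sdb1
sudo mdadm /dev/md0 --remove /dev/sdb1
sudo mdadm /dev/md0 --re-add /dev/sdb1
cat /proc/mdstat    # watch the rebuild progress
Here is the current --detail output: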

Code:
jim@U-NAS:~$ sudo mdadm --detail /dev/md0

/dev/md0:
        Version : 00.90
  Creation Time : Mon Aug 22 20:39:28 2011
     Raid Level : raid5
     Array Size : 2197554816 (2095.75 GiB 2250.30 GB)
  Used Dev Size : 732518272 (698.58 GiB 750.10 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Wed Jun 27 21:18:52 2012
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 2
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 49fae06a:d9294001:89ae0215:c5226ea0
         Events : 0.384

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       0        0        1      removed
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

       4       8       17        -      faulty spare   /dev/sdb1
       5       8       33        -      faulty spare   /dev/sdc1
A second --detail, after the two faulty members had dropped out of the array entirely, showed:
Code:
        Version : 00.90
  Creation Time : Mon Aug 22 20:39:28 2011
     Raid Level : raid5
     Array Size : 2197554816 (2095.75 GiB 2250.30 GB)
  Used Dev Size : 732518272 (698.58 GiB 750.10 GB)
   Raid Devices : 4
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Wed Jun 27 21:34:04 2012
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 49fae06a:d9294001:89ae0215:c5226ea0
         Events : 0.412

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       0        0        1      removed
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
I wasn't able to add the drives back (I kept getting "drive busy"), so I stopped the array.

Now I keep getting errors stating "/dev/md0 assembled from 2 drives - not enough to start the array."
Code:
jim@U-NAS:~$ sudo mdadm --assemble --scan -v
mdadm: looking for devices for /dev/md0
mdadm: no recogniseable superblock on /dev/block/252:0
mdadm: /dev/block/252:0 has wrong uuid.
mdadm: no RAID superblock on /dev/sde
mdadm: /dev/sde has wrong uuid.
mdadm: no RAID superblock on /dev/sdd
mdadm: /dev/sdd has wrong uuid.
mdadm: no RAID superblock on /dev/sdc1
mdadm: /dev/sdc1 has wrong uuid.
mdadm: no RAID superblock on /dev/sdc
mdadm: /dev/sdc has wrong uuid.
mdadm: no RAID superblock on /dev/sdb1
mdadm: /dev/sdb1 has wrong uuid.
mdadm: no RAID superblock on /dev/sdb
mdadm: /dev/sdb has wrong uuid.
mdadm: cannot open device /dev/sda5: Device or resource busy
mdadm: /dev/sda5 has wrong uuid.
mdadm: no RAID superblock on /dev/sda2
mdadm: /dev/sda2 has wrong uuid.
mdadm: cannot open device /dev/sda1: Device or resource busy
mdadm: /dev/sda1 has wrong uuid.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm: /dev/sda has wrong uuid.
mdadm: /dev/sde1 is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 2.
mdadm: no uptodate device for slot 0 of /dev/md0
mdadm: no uptodate device for slot 1 of /dev/md0
mdadm: added /dev/sde1 to /dev/md0 as 3
mdadm: added /dev/sdd1 to /dev/md0 as 2
mdadm: /dev/md0 assembled from 2 drives - not enough to start the array.
mdadm: looking for devices for further assembly
mdadm: no recogniseable superblock on /dev/block/252:0
mdadm: cannot open device /dev/sde1: Device or resource busy
mdadm: cannot open device /dev/sde: Device or resource busy
mdadm: cannot open device /dev/sdd1: Device or resource busy
mdadm: cannot open device /dev/sdd: Device or resource busy
mdadm: no recogniseable superblock on /dev/sdc1
mdadm: no recogniseable superblock on /dev/sdc
mdadm: no recogniseable superblock on /dev/sdb1
mdadm: no recogniseable superblock on /dev/sdb
mdadm: cannot open device /dev/sda5: Device or resource busy
mdadm: no recogniseable superblock on /dev/sda2
mdadm: cannot open device /dev/sda1: Device or resource busy
mdadm: cannot open device /dev/sda: Device or resource busy
Any help would be great!
 
06-28-2012, 02:58 PM   #2
zer0signal (Member)
Code:
mdadm --fail /dev/md0 /dev/sdd1 /dev/sde1
mdadm --remove /dev/md0 /dev/sdd1 /dev/sde1

mdadm --stop /dev/md0
mdadm --remove /dev/md0

mdadm --assemble --scan -v

^^ If that does not work, then do the same thing but re-create the array this way:
Code:
mdadm -Cv /dev/md1 -l5 -n4 /dev/sd[bcde]1
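
A less destructive step worth trying before the re-create is a forced assembly, which tells mdadm to start the array even though the two dropped members have stale event counts. A sketch, assuming all four members are the sd[bcde]1 partitions:
Code:
# stop whatever is currently holding the members, then force-assemble
sudo mdadm --stop /dev/md0
sudo mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
cat /proc/mdstat    # check whether the array came up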

Last edited by zer0signal; 06-28-2012 at 03:00 PM.
 
1 member found this post helpful.
06-28-2012, 07:45 PM   #3
fatmann66 (Original Poster)
Thanks for the response. These are the results I had with the suggested commands:

Code:
jim@U-NAS:~$ sudo mdadm --fail /dev/md0 /dev/sdd1 /dev/sde1
mdadm: cannot get array info for /dev/md0

jim@U-NAS:~$ sudo mdadm --fail /dev/sdd1 /dev/sde1
mdadm: /dev/sdd1 does not appear to be an md device

jim@U-NAS:~$ sudo mdadm --remove /dev/md0 /dev/sdd1 /dev/sde1
mdadm: cannot get array info for /dev/md0
jim@U-NAS:~$ sudo mdadm --remove /dev/md0
remove command complete without issue.
The scan failed, and the last command wouldn't complete, I assume because I wasn't able to remove the other two drives.
Code:
jim@U-NAS:~$ sudo mdadm -Cv /dev/md1 -l5 -n4 /dev/sd[bcde]
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 64K
mdadm: Cannot open /dev/sdd: Device or resource busy
mdadm: Cannot open /dev/sde: Device or resource busy
mdadm: create aborted
Where should I go from here?
 
06-30-2012, 10:01 AM   #4
fatmann66 (Original Poster)
When I do a: sudo cat /proc/mdstat
I get:

Code:
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] 
md0 : inactive sdd1[2](S) sde1[3](S)
      1465036544 blocks
I'm assuming this is why those two drives say they are busy. Any ideas on how to release/remove them so I can run a rescan to re-create the RAID?
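
Presumably stopping the inactive md0 entry would release them; a sketch of what I'm thinking of trying, assuming that is the right approach:
Code:
# stop the inactive array so it releases sdd1/sde1, then retry the scan
sudo mdadm --stop /dev/md0
cat /proc/mdstat                  # the (S) spare members should be gone now
sudo mdadm --assemble --scan -v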
 
07-01-2012, 02:19 PM   #5
fatmann66 (Original Poster)
I was finally able to get the array reinitialized with my data intact. I ended up rebooting the system. It seems one of the drives still had a loose cable and wasn't being detected by the SATA card BIOS. I fixed this and all drives came up.

I did a: sudo cat /proc/mdstat
and it listed two partial RAID arrays, md0 and md_d0.
I stopped both of these:
Code:
mdadm --stop /dev/md0
mdadm --stop /dev/md_d0
then tried running a scan:
Code:
sudo mdadm --assemble --scan -fv
but it wouldn't rebuild the array.

I ran the following command, specifying the array and the drives in order:


Code:
sudo mdadm --create --assume-clean --level=5 --raid-devices=4 /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
These were the results:
Code:
mdadm: /dev/sdb1 appears to contain an ext2fs file system
    size=-2097414144K  mtime=Tue Jun 26 22:05:58 2012
mdadm: /dev/sdb1 appears to be part of a raid array:
    level=raid5 devices=4 ctime=Mon Aug 22 20:39:28 2011
mdadm: /dev/sdc1 appears to be part of a raid array:
    level=raid5 devices=4 ctime=Mon Aug 22 20:39:28 2011
mdadm: /dev/sdd1 appears to be part of a raid array:
    level=raid5 devices=4 ctime=Sun Jul  1 14:38:38 2012
mdadm: /dev/sde1 appears to contain an ext2fs file system
    size=-1828978688K  mtime=Tue Jun 26 21:52:12 2012
mdadm: /dev/sde1 appears to be part of a raid array:
    level=raid5 devices=4 ctime=Sun Jul  1 14:38:38 2012
Continue creating array? y
mdadm: array /dev/md0 started.
jim@U-NAS:~$ sudo mdadm --detail
mdadm: No devices given.
jim@U-NAS:~$ sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90
  Creation Time : Sun Jul  1 14:55:44 2012
     Raid Level : raid5
     Array Size : 2197554816 (2095.75 GiB 2250.30 GB)
  Used Dev Size : 732518272 (698.58 GiB 750.10 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sun Jul  1 14:56:26 2012
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 17d2e6ce:b5dc496f:7a681604:9baa1d99 (local to host U-NAS)
         Events : 0.2

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1
The array came online instantly and the share mounted with data intact.

The major issue I was having through all this was a generic error that the drive /dev/sd*1 was busy. This happened on all drives at one time or another. I believe it was due to a drive being powered down and/or still being listed in the output of: sudo cat /proc/mdstat

After I fixed the loose power cable and stopped the /dev/md* arrays, the final create command was able to access each drive and add it to the array.
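
One caveat for anyone trying the same route: --create --assume-clean only preserves data if the device order, chunk size, and layout match the original array, so it is worth verifying the filesystem read-only before writing anything. A sketch, assuming the usual ext filesystem on md0 and a placeholder mount point /mnt/raid:
Code:
# verify the re-created array read-only before trusting it with writes
sudo fsck -n /dev/md0                 # -n = check only, make no changes
sudo mkdir -p /mnt/raid               # /mnt/raid is a placeholder mount point
sudo mount -o ro /dev/md0 /mnt/raid   # mount read-only
ls /mnt/raid                          # spot-check that the data looks sane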
 
07-01-2012, 09:56 PM   #6
zer0signal (Member)
That is essentially what this command would have done... you just had to get the old array to stop first:
Code:
mdadm -Cv /dev/md1 -l5 -n4 /dev/sd[bcde]1
 
07-02-2012, 05:36 PM   #7
fatmann66 (Original Poster)
Zer0signal,

Thanks, that is what I thought. I'm sure that if I had caught that the one drive was offline, your command would have worked.

A question: as I mentioned, during all of this mdstat listed another md device, md_d0. When I rebooted after installing some patches, the boot paused saying it couldn't mount my RAID 5 array (md0). I skipped the mount, and when I logged on and cat'd mdstat, md_d0 was listed again. I issued a stop and then was able to mount md0. I'm not sure why md_d0 keeps popping up. Do you have any suggestions on how to prevent this?

Thanks,
Fatmann66
 
07-02-2012, 10:01 PM   #8
zer0signal (Member)
I am not sure. I had issues when I moved my array to another Linux box, and udev, I believe, was assembling it wrong... so I just moved these files:

64-md-raid.rules
65-md-incremental.rules

from the directory:
/etc/udev/rules.d

cleared out my /etc/mdadm.conf

Then I put in my /etc/rc.local file:

Code:
mdadm --assemble --scan
vgchange -ay
mount /dev/vg_nas_file/lv_nas_file /nfs/global

And now when it reboots, my array ALWAYS assembles properly... I am positive this is not the 'PROPER' way to resolve the issue, but it has worked for me for months, across the multiple Linux boxes this storage has resided on.
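
The more conventional route on Ubuntu is usually to record the array in mdadm.conf and rebuild the initramfs, so boot-time assembly uses the expected md0 name instead of inventing md_d0. A sketch, assuming a stock Ubuntu 10.04 layout:
Code:
# pin the array definition so early-boot assembly stops creating md_d0
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf   # append the ARRAY line
sudo update-initramfs -u                                         # rebuild the initramfs with the new config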

Last edited by zer0signal; 07-02-2012 at 10:02 PM.
 
  

