Old 08-24-2012, 02:42 PM   #1
rajzmailbox
LQ Newbie
 
Registered: Mar 2012
Posts: 7

Rep: Reputation: Disabled
RAID5 not active


Hi,

We had a power failure, and since then my RAID array has not come up. I am using openSUSE 10.3.
When I try to mount it:
Code:
 # mount /dev/md3 /nativ
mount: you must specify the filesystem type
Code:
# mdadm --detail /dev/md3
mdadm: md device /dev/md3 does not appear to be active.
The output of fdisk -l:
Code:
# fdisk -l

Disk /dev/sda: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0009d7e4

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1        3134    25173823+  fd  Linux raid autodetect
/dev/sda2   *        3135       14593    92044417+  fd  Linux raid autodetect

Disk /dev/sdb: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000b4651

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        3134    25173823+  fd  Linux raid autodetect
/dev/sdb2            3135       14593    92044417+  fd  Linux raid autodetect

Disk /dev/md1: 94.2 GB, 94253408256 bytes
2 heads, 4 sectors/track, 23011086 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md0: 25.7 GB, 25777917952 bytes
2 heads, 4 sectors/track, 6293437 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000d3e99

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      121601   976760001   fd  Linux raid autodetect

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x5c2face6

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1      121601   976760001   fd  Linux raid autodetect

Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000001

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1      121601   976760001   fd  Linux raid autodetect

Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000d272b

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1      121601   976760001   83  Linux

Disk /dev/sdg: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0003805f

   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1               1      121601   976760001   fd  Linux raid autodetect

Disk /dev/sdh: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000e6da1

   Device Boot      Start         End      Blocks   Id  System
/dev/sdh1               1      121601   976760001   fd  Linux raid autodetect

Disk /dev/sdi: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x5c2facf2

   Device Boot      Start         End      Blocks   Id  System
/dev/sdi1               1      121601   976760001   fd  Linux raid autodetect

Disk /dev/sdj: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x000b6b5b

   Device Boot      Start         End      Blocks   Id  System
/dev/sdj1               1      121601   976760001   fd  Linux raid autodetect

Disk /dev/sdk: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0003ac92

   Device Boot      Start         End      Blocks   Id  System
/dev/sdk1               2      121601   976752000   83  Linux
/dev/sdk4               1           1           0+  ee  EFI GPT

Partition table entries are not in disk order
Please help me get this back up and working.
Let me know if you need more info.

Thank you.
 
Old 08-25-2012, 07:16 AM   #2
maniannam
Member
 
Registered: Dec 2007
Location: India
Distribution: fedora 11
Posts: 64

Rep: Reputation: 15
Hello,

Please post the output of /proc/mdstat; an example command is below.
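For reference, a minimal way to grab it (nothing assumed here beyond a root shell):
Code:
# cat /proc/mdstat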



Regards
Manianna
 
Old 08-25-2012, 11:42 AM   #3
rajzmailbox
LQ Newbie
 
Registered: Mar 2012
Posts: 7

Original Poster
Rep: Reputation: Disabled
Hello Manianna,

Here is the output of /proc/mdstat:

Code:
# cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4]
md0 : active(auto-read-only) raid1 sda1[0] sdb1[2]
      25173748 blocks super 1.0 [2/2] [UU]
      bitmap: 0/193 pages [0KB], 64KB chunk

md1 : active raid1 sda2[2] sdb2[1]
      92044344 blocks super 1.0 [2/2] [UU]
      bitmap: 8/176 pages [32KB], 256KB chunk

unused devices: <none>
Thank You
 
Old 08-26-2012, 03:40 PM   #4
maniannam
Member
 
Registered: Dec 2007
Location: India
Distribution: fedora 11
Posts: 64

Rep: Reputation: 15
Hello,

Based on your mdstat output, md3 does not appear to be assembled at all. Anyway, try examining the array member devices with "mdadm --examine <device name>", as sketched below.
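For example (only a sketch; /dev/sdc1 is taken from the fdisk listing above, so repeat for each RAID member partition on your system):
Code:
# mdadm --examine /dev/sdc1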

Thanks.
 
Old 08-26-2012, 05:29 PM   #5
whizje
Member
 
Registered: Sep 2008
Location: The Netherlands
Distribution: Slackware64 current
Posts: 583

Rep: Reputation: 129
What is the output of
Code:
mdadm -E /dev/sd[c-j]1
 
Old 08-27-2012, 12:45 PM   #6
rajzmailbox
LQ Newbie
 
Registered: Mar 2012
Posts: 7

Original Poster
Rep: Reputation: Disabled
The output of mdadm --examine /dev/md3:
Code:
 # mdadm --examine /dev/md3
mdadm: No md superblock detected on /dev/md3.
The output of mdadm -E /dev/sd[c-j]1:
Code:
 # mdadm -E /dev/sd[c-j]1
/dev/sdc1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x1
     Array UUID : ff736409:5b1a6101:0f1f75c0:529a4f95
           Name : 3
  Creation Time : Sun Oct 23 16:04:53 2011
     Raid Level : raid5
   Raid Devices : 8

  Used Dev Size : 1953519728 (931.51 GiB 1000.20 GB)
     Array Size : 13674637312 (6520.58 GiB 7001.41 GB)
      Used Size : 1953519616 (931.51 GiB 1000.20 GB)
   Super Offset : 1953519984 sectors
          State : clean
    Device UUID : 4d73af4d:a357a2b8:188c9bf9:f473090d

Internal Bitmap : -234 sectors from superblock
    Update Time : Thu Mar 29 17:00:31 2012
       Checksum : 1219e0a6 - correct
         Events : 71212

         Layout : left-symmetric
     Chunk Size : 128K

    Array Slot : 4 (0, 1, 2, failed, 4, 5, 6, failed)
   Array State : uuu_Uuu_ 2 failed
/dev/sdd1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x1
     Array UUID : ff736409:5b1a6101:0f1f75c0:529a4f95
           Name : 3
  Creation Time : Sun Oct 23 16:04:53 2011
     Raid Level : raid5
   Raid Devices : 8

  Used Dev Size : 1953519728 (931.51 GiB 1000.20 GB)
     Array Size : 13674637312 (6520.58 GiB 7001.41 GB)
      Used Size : 1953519616 (931.51 GiB 1000.20 GB)
   Super Offset : 1953519984 sectors
          State : clean
    Device UUID : b7a550a8:e4965374:377ba95a:5b2ee41c

Internal Bitmap : -234 sectors from superblock
    Update Time : Thu Mar 29 17:00:31 2012
       Checksum : 9954fbd7 - correct
         Events : 71212

         Layout : left-symmetric
     Chunk Size : 128K

    Array Slot : 5 (0, 1, 2, failed, 4, 5, 6, failed)
   Array State : uuu_uUu_ 2 failed
/dev/sde1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x1
     Array UUID : ff736409:5b1a6101:0f1f75c0:529a4f95
           Name : 3
  Creation Time : Sun Oct 23 16:04:53 2011
     Raid Level : raid5
   Raid Devices : 8

  Used Dev Size : 1953519728 (931.51 GiB 1000.20 GB)
     Array Size : 13674637312 (6520.58 GiB 7001.41 GB)
      Used Size : 1953519616 (931.51 GiB 1000.20 GB)
   Super Offset : 1953519984 sectors
          State : clean
    Device UUID : c498b8f9:f6825ca1:dddace09:1a04adbd

Internal Bitmap : -234 sectors from superblock
    Update Time : Thu Mar 29 17:00:31 2012
       Checksum : 67b4105d - correct
         Events : 71212

         Layout : left-symmetric
     Chunk Size : 128K

    Array Slot : 6 (0, 1, 2, failed, 4, 5, 6, failed)
   Array State : uuu_uuU_ 2 failed
/dev/sdf1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x1
     Array UUID : ff736409:5b1a6101:0f1f75c0:529a4f95
           Name : 3
  Creation Time : Sun Oct 23 16:04:53 2011
     Raid Level : raid5
   Raid Devices : 8

  Used Dev Size : 1953519728 (931.51 GiB 1000.20 GB)
     Array Size : 13674637312 (6520.58 GiB 7001.41 GB)
      Used Size : 1953519616 (931.51 GiB 1000.20 GB)
   Super Offset : 1953519984 sectors
          State : clean
    Device UUID : 75189995:fcb6fd82:dbb98c44:6e49b3c7

Internal Bitmap : -234 sectors from superblock
    Update Time : Thu Mar 29 17:00:31 2012
       Checksum : 29f9e868 - correct
         Events : 71212

         Layout : left-symmetric
     Chunk Size : 128K

    Array Slot : 8 (0, 1, 2, failed, 4, 5, 6, failed)
   Array State : uuu_uuu_ 2 failed
/dev/sdg1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x1
     Array UUID : ff736409:5b1a6101:0f1f75c0:529a4f95
           Name : 3
  Creation Time : Sun Oct 23 16:04:53 2011
     Raid Level : raid5
   Raid Devices : 8

  Used Dev Size : 1953519728 (931.51 GiB 1000.20 GB)
     Array Size : 13674637312 (6520.58 GiB 7001.41 GB)
      Used Size : 1953519616 (931.51 GiB 1000.20 GB)
   Super Offset : 1953519984 sectors
          State : clean
    Device UUID : 542026c3:9991c88a:f967cb02:fad51719

Internal Bitmap : -234 sectors from superblock
    Update Time : Thu Mar 29 17:00:31 2012
       Checksum : 6ef50585 - correct
         Events : 71212

         Layout : left-symmetric
     Chunk Size : 128K

    Array Slot : 0 (0, 1, 2, failed, 4, 5, 6, failed)
   Array State : Uuu_uuu_ 2 failed
/dev/sdh1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x1
     Array UUID : ff736409:5b1a6101:0f1f75c0:529a4f95
           Name : 3
  Creation Time : Sun Oct 23 16:04:53 2011
     Raid Level : raid5
   Raid Devices : 8

  Used Dev Size : 1953519728 (931.51 GiB 1000.20 GB)
     Array Size : 13674637312 (6520.58 GiB 7001.41 GB)
      Used Size : 1953519616 (931.51 GiB 1000.20 GB)
   Super Offset : 1953519984 sectors
          State : clean
    Device UUID : 436835f2:1a0f6b34:da9e1781:0fc37544

Internal Bitmap : -234 sectors from superblock
    Update Time : Thu Mar 29 17:00:31 2012
       Checksum : f150eeec - correct
         Events : 71212

         Layout : left-symmetric
     Chunk Size : 128K

    Array Slot : 1 (0, 1, 2, failed, 4, 5, 6, failed)
   Array State : uUu_uuu_ 2 failed
/dev/sdi1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x1
     Array UUID : ff736409:5b1a6101:0f1f75c0:529a4f95
           Name : 3
  Creation Time : Sun Oct 23 16:04:53 2011
     Raid Level : raid5
   Raid Devices : 8

  Used Dev Size : 1953519728 (931.51 GiB 1000.20 GB)
     Array Size : 13674637312 (6520.58 GiB 7001.41 GB)
      Used Size : 1953519616 (931.51 GiB 1000.20 GB)
   Super Offset : 1953519984 sectors
          State : clean
    Device UUID : f60340dd:47f6aa79:5efb52ec:d8ef40fd

Internal Bitmap : -234 sectors from superblock
    Update Time : Thu Mar 29 17:00:31 2012
       Checksum : 45a1fb1c - correct
         Events : 71212

         Layout : left-symmetric
     Chunk Size : 128K

    Array Slot : 2 (0, 1, 2, failed, 4, 5, 6, failed)
   Array State : uuU_uuu_ 2 failed
/dev/sdj1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x1
     Array UUID : ff736409:5b1a6101:0f1f75c0:529a4f95
           Name : 3
  Creation Time : Sun Oct 23 16:04:53 2011
     Raid Level : raid5
   Raid Devices : 8

  Used Dev Size : 1953519728 (931.51 GiB 1000.20 GB)
     Array Size : 13674637312 (6520.58 GiB 7001.41 GB)
      Used Size : 1953519616 (931.51 GiB 1000.20 GB)
   Super Offset : 1953519984 sectors
          State : clean
    Device UUID : 694c7e73:ca461d43:38165447:030c34ae

Internal Bitmap : -234 sectors from superblock
    Update Time : Thu Mar 29 17:00:31 2012
       Checksum : b146cb1c - correct
         Events : 71212

         Layout : left-symmetric
     Chunk Size : 128K

    Array Slot : 9 (0, 1, 2, failed, 4, 5, 6, failed)
   Array State : uuu_uuu_ 2 failed
 
Old 08-27-2012, 01:47 PM   #7
whizje
Member
 
Registered: Sep 2008
Location: The Netherlands
Distribution: Slackware64 current
Posts: 583

Rep: Reputation: 129
What you could try is
Code:
mdadm --assemble /dev/md3 --scan --force
 
Old 08-27-2012, 03:08 PM   #8
rajzmailbox
LQ Newbie
 
Registered: Mar 2012
Posts: 7

Original Poster
Rep: Reputation: Disabled
The output of mdadm --assemble /dev/md3 --scan --force:

Code:
 # mdadm --assemble /dev/md3 --scan --force
mdadm: /dev/md3 assembled from 6 drives and 2 spares - not enough to start the array.
This problem is driving me crazy. Here is what happened: about a month ago, a colleague of mine unknowingly pulled a HDD and put it back, and also added a new one while the system was on and active. Could this have caused the problem? All along I thought it was due to the power failure. Could you please help me fix this?
 
Old 08-28-2012, 02:09 AM   #9
eantoranz
Senior Member
 
Registered: Apr 2003
Location: Colombia
Distribution: Kubuntu, Debian, Knoppix
Posts: 1,982
Blog Entries: 1

Rep: Reputation: 83
Hey..... I guess you could try this as a last resort:

http://www.freesoftwaremagazine.com/.../recovery_raid
https://code.launchpad.net/~eantoran...k/raidpycovery
 
Old 08-28-2012, 03:42 PM   #10
whizje
Member
 
Registered: Sep 2008
Location: The Netherlands
Distribution: Slackware64 current
Posts: 583

Rep: Reputation: 129
There is another option, which is to recreate your array, but it is risky, so I advise you to subscribe to the linux-raid mailing list (send a mail to majordomo@vger.kernel.org with the body "subscribe linux-raid") and ask there about your best options; a rough sketch of what a recreate would involve is below. It is best to also send the output of mdadm -E /dev/sd[c-j]1 there. The creator of mdadm is often on that list as well, and they can tell you what you have to do to save your RAID array.
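Purely as an illustration of what "recreate" means here (an assumption-laden sketch derived from the --examine output above: slot order sdg1, sdh1, sdi1, missing, sdc1, sdd1, sde1, missing; 1.0 metadata; 128K chunk; left-symmetric layout), do not run anything like this until the linux-raid list has confirmed the exact parameters and device order:
Code:
# DANGEROUS: rewrites the superblocks; verify every parameter and the device order first
mdadm --create /dev/md3 --assume-clean --level=5 --raid-devices=8 \
      --metadata=1.0 --chunk=128 --layout=left-symmetric \
      /dev/sdg1 /dev/sdh1 /dev/sdi1 missing /dev/sdc1 /dev/sdd1 /dev/sde1 missing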

Last edited by whizje; 08-28-2012 at 03:44 PM.
 
Old 08-30-2012, 07:18 PM   #11
rajzmailbox
LQ Newbie
 
Registered: Mar 2012
Posts: 7

Original Poster
Rep: Reputation: Disabled
At last my RAID has become active, but it is not started and is degraded.
Code:
 # mdadm --detail /dev/md3
/dev/md3:
        Version : 01.00.03
  Creation Time : Sun Oct 23 16:04:53 2011
     Raid Level : raid5
  Used Dev Size : 976759808 (931.51 GiB 1000.20 GB)
   Raid Devices : 8
  Total Devices : 8
Preferred Minor : 3
    Persistence : Superblock is persistent

    Update Time : Thu Mar 29 17:00:31 2012
          State : active, degraded, Not Started
 Active Devices : 6
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 2

         Layout : left-symmetric
     Chunk Size : 128K

           Name : 3
           UUID : ff736409:5b1a6101:0f1f75c0:529a4f95
         Events : 71212

    Number   Major   Minor   RaidDevice State
       0       8      241        0      active sync   /dev/sdp1
       1      65        1        1      active sync   /dev/sdq1
       2      65       17        2      active sync   /dev/sdr1
       3       0        0        3      removed
       4       8      177        4      active sync   /dev/sdl1
       5       8      193        5      active sync   /dev/sdm1
       6       8      209        6      active sync   /dev/sdn1
       7       0        0        7      removed

       8       8       33        -      spare   /dev/sdc1
      10      65       48        -      spare   /dev/sdt
What should I be doing next to get it working?

Thank you
 
  


Tags: mdadm, raid5

