Go Back > Forums > Linux Forums > Linux - Desktop
Linux - Desktop This forum is for the discussion of all Linux Software used in a desktop context.


Old 01-14-2013, 10:50 AM   #1
LQ Newbie
Registered: Jan 2013
Posts: 2

mdadm: Trouble with raid6

Hi all,

I am a newbie with mdadm. We have a computer with a RAID6 array used as a data pool, mounted under Ubuntu. Everything went fine with the RAID until two weeks ago, when the computer crashed, and now the mounted pool is not available.

When I restart the machine, it gets to a point where it says "/mnt/pool is not present or not available". At this point I pressed "S" and started up Ubuntu anyway. If I run palimpsest I see all the disks and the RAID, but the RAID with /mnt/pool has the state "Partially assembled, not running". I also see the disks from the RAID; their SMART status is "Disk is healthy" or "Disk has a few bad sectors".

Anyway, I read many posts about this and tried to stop the RAID via palimpsest and start it again, but now I get this error:
Error assembling array: mdadm exited with exit code 1:
mdadm: metadata format 01.02 unknown, ignored.
mdadm: cannot open device /dev/sdg1: Device or resource busy
mdadm: /dev/sdg1 has no superblock - assembly aborted

I have important data on the RAID, so I can't experiment with it, and I am looking for help.

Some information about system and raid:

Ubuntu 10.04, 2.6.32-33-generic


Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : inactive sdh1[0] sdf1[3] sdg1[2] sde1[1]
7814047460 blocks super 1.2

md1 : active raid1 sdd2[1] sdc2[0]
7833536 blocks [2/2] [UU]

md0 : active raid1 sdd1[1] sdc1[0]
50780096 blocks [2/2] [UU]

unused devices: <none>
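In the /proc/mdstat output above, md2 shows up without a personality and marked "inactive", which is why the pool never mounts. A quick way to spot any non-active array (a sketch; the awk pattern assumes the stock /proc/mdstat layout shown above, where the third field of an array line is its state):

```shell
# print the name of every md array whose status line is not "active"
awk '/^md[0-9]/ && $3 != "active" {print $1}' /proc/mdstat
```

On the listing above this prints only md2.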

# mdadm.conf
# Please refer to mdadm.conf(5) for information about this file.

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts

# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=3ac84499:ba962435:4d0ac48f:0dedaf16
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=744161aa:ba92c60d:ae5af280:697c2a6e
ARRAY /dev/md2 level=raid6 num-devices=6 metadata=01.02 name=ool1 UUID=52544974:24687624:d1188992:95f07e6b

# This file was auto-generated on Fri, 30 Jul 2010 22:52:29 +0100
# by mkconf $Id$
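The "metadata format 01.02 unknown, ignored" warning in the error above comes from this file: metadata=01.02 in the md2 ARRAY line is the old spelling, and newer mdadm releases expect metadata=1.2. The warning is harmless by itself, but it is easy to silence. A minimal sketch, assuming the file lives at /etc/mdadm/mdadm.conf as on stock Ubuntu:

```shell
# keep a backup, then rewrite the deprecated metadata spelling in place
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
sed -i 's/metadata=01\.02/metadata=1.2/' /etc/mdadm/mdadm.conf
```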

mdadm --detail /dev/md2

mdadm: metadata format 01.02 unknown, ignored.
Version : 01.02
Creation Time : Sun Aug 8 22:50:41 2010
Raid Level : raid6
Used Dev Size : 1953511424 (1863.01 GiB 2000.40 GB)
Raid Devices : 6
Total Devices : 4
Preferred Minor : 2
Persistence : Superblock is persistent

Update Time : Tue Jan 1 18:39:38 2013
State : active, degraded, Not Started
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0

Chunk Size : 512K

Name : ool1
UUID : 52544974:24687624:d1188992:95f07e6b
Events : 541811

Number Major Minor RaidDevice State
0 8 113 0 active sync /dev/sdh1
1 8 65 1 active sync /dev/sde1
2 8 97 2 active sync /dev/sdg1
3 8 81 3 active sync /dev/sdf1
4 0 0 4 removed
5 0 0 5 removed
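The slot table above can be tallied mechanically. A sketch, assuming the --detail output was saved to a (hypothetical) file md2-detail.txt:

```shell
# count member slots per state in a saved `mdadm --detail` listing
grep -c 'active sync' md2-detail.txt   # members still attached to the array
grep -c 'removed' md2-detail.txt       # slots whose device went missing
```

On the output above this counts 4 slots in active sync and 2 removed.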

What does "removed" mean? Is it possible that the SATA controller or the disks are bad, for example because of the bad sectors on some disks? But I can see all the disks in palimpsest.

Many thanks for some help.
Old 01-14-2013, 11:41 AM   #2
Senior Member
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: CentOS 6 (pre-systemd)
Posts: 2,632

Run "cat /proc/partitions" to list all of the disks and partitions in your system. Try to identify the two missing partitions from /dev/md2. If they aren't present in the list, you may have a hardware problem, like a cable not plugged in. If they are there but not showing as RAID devices, maybe they were somehow overwritten.
Old 01-14-2013, 02:30 PM   #3
LQ Newbie
Registered: Jan 2013
Posts: 2

Original Poster
Thanks for your reply.
Here is the output of cat /proc/partitions:

major minor #blocks name

8 0 1953514584 sda
8 1 1953512001 sda1
8 16 1953514584 sdb
8 17 1953512001 sdb1
8 32 58615704 sdc
8 33 50780160 sdc1
8 34 7833600 sdc2
9 0 50780096 md0
9 1 7833536 md1
8 48 58615704 sdd
8 49 50780160 sdd1
8 50 7833600 sdd2
8 64 1953514584 sde
8 65 1953512001 sde1
8 80 1953514584 sdf
8 81 1953512001 sdf1
8 96 1953514584 sdg
8 97 1953512001 sdg1
8 112 1953514584 sdh
8 113 1953512001 sdh1

All of the partitions from the RAID are available.
Does anybody have any ideas?
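One detail worth noticing in the /proc/partitions listing: six partitions (sda1, sdb1, sde1, sdf1, sdg1, sdh1) all have exactly 1953512001 blocks, the size of the RAID members, yet md2 currently sees only four of them, so sda1 and sdb1 may be the two slots shown as "removed". A sketch to pull out the size-matched candidates (assumes the stock four-column /proc/partitions layout):

```shell
# list every partition whose block count matches the known member size
awk '$3 == 1953512001 {print $4}' /proc/partitions
```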
Old 01-21-2013, 08:08 PM   #4
LQ Newbie
Registered: Mar 2010
Posts: 23

Are you sure you set up this raid6 with 4 disks?

Also, please try to stop the array with mdadm and then reassemble it:

mdadm --stop /dev/md2
mdadm --assemble --scan /dev/md2

