LinuxQuestions.org


Tomyx 01-14-2013 09:50 AM

mdadm: Trouble with raid6
 
Hi all,

I am a newbie at mdadm. We have a computer running Ubuntu with a RAID6 array used as a data pool. Everything went fine with the RAID until two weeks ago, when the computer crashed; the mounted pool isn't available now.

When I restart the machine, it gets to a point where it says "/mnt/pool is not present or not available". At this point I pressed "S" and started up Ubuntu anyway. If I run palimpsest I can see all the disks and the RAID, but the array with /mnt/pool has the state "Partially assembled, not running". I can also see the member disks of the RAID; their SMART status is "Disk is healthy" or "Disk has a few bad sectors".

Anyway, I have read many posts about this. I tried to stop the RAID via palimpsest and start it again, but now I get this error:
Error assembling array: mdadm exited with exit code 1:
mdadm: metadata format 01.02 unknown, ignored.
mdadm: cannot open device /dev/sdg1: Device or resource busy
mdadm: /dev/sdg1 has no superblock - assembly aborted
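
If I understand the "Device or resource busy" message right, it just means the member partitions are still claimed by the half-assembled array, so it has to be stopped before they can be touched. A sketch of what I mean (array name taken from my /proc/mdstat below):

Code:

# members stay busy while md2 is partially assembled;
# stopping the array releases them for examination or reassembly
mdadm --stop /dev/md2
cat /proc/mdstat   # md2 should no longer appear here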


I have important data on the RAID, so I can't experiment with it, and I am looking for some help.

Some information about system and raid:

system:
Ubuntu 10.04, 2.6.32-33-generic

/proc/mdstat:

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : inactive sdh1[0] sdf1[3] sdg1[2] sde1[1]
7814047460 blocks super 1.2

md1 : active raid1 sdd2[1] sdc2[0]
7833536 blocks [2/2] [UU]

md0 : active raid1 sdd1[1] sdc1[0]
50780096 blocks [2/2] [UU]

unused devices: <none>


/etc/mdadm/mdadm.conf:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=3ac84499:ba962435:4d0ac48f:0dedaf16
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=744161aa:ba92c60d:ae5af280:697c2a6e
ARRAY /dev/md2 level=raid6 num-devices=6 metadata=01.02 name=:pool1 UUID=52544974:24687624:d1188992:95f07e6b

# This file was auto-generated on Fri, 30 Jul 2010 22:52:29 +0100
# by mkconf $Id$
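
From what I have read, the "metadata format 01.02 unknown, ignored" warning seems to come from the metadata=01.02 token on the md2 ARRAY line above: old mdadm versions wrote the version as "01.02", but newer ones only understand "1.2", so they ignore the token. If that is right, the corrected line would look like this (same UUID and name, only the metadata value changed):

Code:

# /etc/mdadm/mdadm.conf - md2 line in the form newer mdadm expects
ARRAY /dev/md2 level=raid6 num-devices=6 metadata=1.2 name=:pool1 UUID=52544974:24687624:d1188992:95f07e6b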


mdadm --detail /dev/md2

mdadm: metadata format 01.02 unknown, ignored.
/dev/md2:
Version : 01.02
Creation Time : Sun Aug 8 22:50:41 2010
Raid Level : raid6
Used Dev Size : 1953511424 (1863.01 GiB 2000.40 GB)
Raid Devices : 6
Total Devices : 4
Preferred Minor : 2
Persistence : Superblock is persistent

Update Time : Tue Jan 1 18:39:38 2013
State : active, degraded, Not Started
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0

Chunk Size : 512K

Name : :pool1
UUID : 52544974:24687624:d1188992:95f07e6b
Events : 541811

Number Major Minor RaidDevice State
0 8 113 0 active sync /dev/sdh1
1 8 65 1 active sync /dev/sde1
2 8 97 2 active sync /dev/sdg1
3 8 81 3 active sync /dev/sdf1
4 0 0 4 removed
5 0 0 5 removed


What does "removed" mean? Is it possible that the SATA controller or the disks are bad, for example because of bad sectors on the disks? But I can see all the disks in palimpsest.

Many thanks for any help.

smallpond 01-14-2013 10:41 AM

Do
Code:

cat /proc/partitions
to list all of the disks and partitions in your system. Try to identify the two missing partitions from /dev/md2. If they aren't present in the list, maybe you have a hardware problem like a cable not plugged in. If they are there but not showing as RAID devices, maybe they were somehow overwritten.
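
If they are there, one way to check whether each candidate partition still carries a valid md superblock is to examine them one by one; something along these lines (an untested sketch, substitute the partition names from your system):

Code:

# true members of md2 should all report
# Array UUID : 52544974:24687624:d1188992:95f07e6b
for p in /dev/sd[a-h]1; do
    echo "== $p =="
    mdadm --examine "$p" 2>/dev/null | grep -E 'Array UUID|Device Role|Events'
done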

Tomyx 01-14-2013 01:30 PM

Thanks for your reply.
The output from cat /proc/partitions is here:

major minor #blocks name

8 0 1953514584 sda
8 1 1953512001 sda1
8 16 1953514584 sdb
8 17 1953512001 sdb1
8 32 58615704 sdc
8 33 50780160 sdc1
8 34 7833600 sdc2
9 0 50780096 md0
9 1 7833536 md1
8 48 58615704 sdd
8 49 50780160 sdd1
8 50 7833600 sdd2
8 64 1953514584 sde
8 65 1953512001 sde1
8 80 1953514584 sdf
8 81 1953512001 sdf1
8 96 1953514584 sdg
8 97 1953512001 sdg1
8 112 1953514584 sdh
8 113 1953512001 sdh1


All the partitions from the RAID are available.
Does anybody have an idea?

Envite 01-21-2013 07:08 PM

Are you sure you set up this raid6 with 4 disks?

Besides, please try to STOP the array with mdadm and then reassemble it:

Code:

mdadm --stop /dev/md2
mdadm --assemble --scan /dev/md2
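
With --scan, mdadm takes the identity of /dev/md2 from /etc/mdadm/mdadm.conf. If that still doesn't start it, you can name the surviving members explicitly, roughly like this (a sketch using the device names from your --detail output):

Code:

mdadm --stop /dev/md2
# assemble from the four surviving members; --run lets a raid6 start
# degraded with two devices missing (the minimum it can run on)
mdadm --assemble --run /dev/md2 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1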


