Hi all,
I am a newbie with mdadm. We have a computer with a RAID6 array used as a data pool, mounted on Ubuntu. Everything went fine with the RAID until two weeks ago, when the computer crashed; now the mounted pool is not available.
When I restart the machine, it gets to a point where it says "/mnt/pool is not present or not available". At that point I pressed "S" and started Ubuntu anyway. If I run palimpsest I can see all the disks and the RAID arrays, but the array for /mnt/pool has the state "Partially assembled, not running". I can also see the disks belonging to the array; their SMART status is "Disk is healthy" or "Disk has a few bad sectors".
Anyway, I have read many posts about this and tried to stop the array via palimpsest and start it again, but now I get this error:
Error assembling array: mdadm exited with exit code 1: mdadm:
metadata format 01.02 unknown, ignored.
mdadm: cannot open device /dev/sdg1: Device or resource busy
mdadm: /dev/sdg1 has no superblock - assembly aborted
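If I understand other threads correctly, the "Device or resource busy" message may just mean that md2 is still held in its partially assembled, inactive state, so its members cannot be opened again. A sketch of what I intend to try from the command line, assuming the member partitions are /dev/sd[efgh]1 as shown in the mdstat output below (please correct me if this is unsafe):

```shell
# Stop the half-assembled array to release its member devices.
mdadm --stop /dev/md2
# Retry assembly with verbose output to see why members are rejected.
mdadm --assemble --verbose /dev/md2 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
```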
I have important data on this array, so I can't experiment with it, and I am looking for help.
Some information about the system and the array:
system:
Ubuntu 10.04, 2.6.32-33-generic
/proc/mdstat:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : inactive sdh1[0] sdf1[3] sdg1[2] sde1[1]
7814047460 blocks super 1.2
md1 : active raid1 sdd2[1] sdc2[0]
7833536 blocks [2/2] [UU]
md0 : active raid1 sdd1[1] sdc1[0]
50780096 blocks [2/2] [UU]
unused devices: <none>
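In case it helps, this is how I plan to inspect the superblock on each member of md2 (read-only, so I assume it is safe; device names taken from the mdstat output above):

```shell
# Print the md superblock recorded on each member partition of md2.
mdadm --examine /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
```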
/etc/mdadm/mdadm.conf:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=3ac84499:ba962435:4d0ac48f:0dedaf16
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=744161aa:ba92c60d:ae5af280:697c2a6e
ARRAY /dev/md2 level=raid6 num-devices=6 metadata=01.02 name=ool1 UUID=52544974:24687624:d1188992:95f07e6b
# This file was auto-generated on Fri, 30 Jul 2010 22:52:29 +0100
# by mkconf $Id$
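From what I have read, the "metadata format 01.02 unknown, ignored" warning apparently comes from the zero-padded version string that an older mkconf wrote into mdadm.conf; newer mdadm expects "1.2". If that is right, the ARRAY line for md2 would need to read:

```
ARRAY /dev/md2 level=raid6 num-devices=6 metadata=1.2 name=ool1 UUID=52544974:24687624:d1188992:95f07e6b
```

(The warning says "ignored", so I assume this alone does not explain the failed assembly.)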
mdadm --detail /dev/md2
mdadm: metadata format 01.02 unknown, ignored.
/dev/md2:
Version : 01.02
Creation Time : Sun Aug 8 22:50:41 2010
Raid Level : raid6
Used Dev Size : 1953511424 (1863.01 GiB 2000.40 GB)
Raid Devices : 6
Total Devices : 4
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Tue Jan 1 18:39:38 2013
State : active, degraded, Not Started
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Chunk Size : 512K
Name : ool1
UUID : 52544974:24687624:d1188992:95f07e6b
Events : 541811
Number Major Minor RaidDevice State
0 8 113 0 active sync /dev/sdh1
1 8 65 1 active sync /dev/sde1
2 8 97 2 active sync /dev/sdg1
3 8 81 3 active sync /dev/sdf1
4 0 0 4 removed
5 0 0 5 removed
What does "removed" mean here? Could the SATA controller or the disks be bad, for example because of the bad sectors on the disks? I do see all the disks in palimpsest, though.
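To figure out which members dropped out and when, I understand the Events counter in each member's superblock can be compared (a member that was kicked out should show an older count than the rest), and the kernel log may record the original failure. A sketch, assuming 1.2-format superblocks as above:

```shell
# Compare the event counter and role stored on each member's superblock.
mdadm --examine /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 | grep -E '^/dev/|Events|Device Role'
# Look for md/ata errors around the time of the crash.
dmesg | grep -iE 'md2|raid|ata'
```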
Many thanks for any help.