LinuxQuestions.org
Old 03-17-2014, 04:00 PM   #1
kevinsmith42
LQ Newbie
 
Registered: Mar 2014
Posts: 4

Rep: Reputation: Disabled
RAID5 array missing and won't assemble


I have a Raspberry Pi running XBMC (Linux CaribNAS 3.10.24 #2 PREEMPT Mon Dec 23 05:18:12 UTC 2013 armv6l GNU/Linux) with a 3-disk RAID 5 array. It had been working until yesterday, when it stopped running overnight, and now it will not start. I have a lot of data on it that is not backed up, so I would like to recover it without data loss if possible.

I have noticed that mdadm --examine --scan shows two /dev/md/0 arrays. Only the second one is correct, and I can't get rid of the first. I don't think this is the cause of the problem, but I would like to tidy things up.

dmesg | grep md:
md: md0 stopped.
md: sdc1 does not have a valid v1.2 superblock, not importing!
md: md_import_device returned -22
md: sdd1 does not have a valid v1.2 superblock, not importing!
md: md_import_device returned -22
md: sdb1 does not have a valid v1.2 superblock, not importing!
md: md_import_device returned -22
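(Side note: the -22 that md_import_device returns is the kernel's -EINVAL, i.e. "Invalid argument" - the same error string mdadm prints during the failed assembly further down. A quick way to confirm the errno mapping, assuming python3 is on the box:)

```shell
# -22 from md_import_device is -EINVAL; confirm the errno-to-string mapping
# (python3 is used here only for the errno table lookup):
python3 -c 'import errno, os; print(errno.EINVAL, os.strerror(errno.EINVAL))'
# → 22 Invalid argument
```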


mdadm -D /dev/md0
mdadm: md device /dev/md0 does not appear to be active.


mdadm --examine --scan
ARRAY /dev/md/0 metadata=1.2 UUID=c20f8d11:b81ae9b6:1b318343:ed35987e name=raspbmc:0
ARRAY /dev/md/0 metadata=1.2 UUID=8ec7d307:b5609dfd:caea7099:a9d8201c name=CaribNAS:0


cat /etc/mdadm/mdadm.conf
DEVICE partitions
HOMEHOST CaribNAS
MAILADDR CaribNAS@gmail.com
ARRAY /dev/md0 metadata=1.2 Name=CaribNAS:0 UUID=8ec7d307:b5609dfd:caea7099:a9d8201c

cat /etc/fstab
proc /proc proc defaults 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620 0 0
/dev/sda1 / ext4 defaults 0 0
/dev/sda2 /temp ext4 defaults 0 0
/dev/mmcblk0p1 /boot vfat defaults,noatime 0 0
#/dev/mmcblk0p2 / f2fs defaults,noatime 0 0
/dev/md0 /RAID5 auto defaults 0 3


cat /proc/mdstat
Personalities :
unused devices: <none>

mdadm --stop /dev/md0
mdadm: stopped /dev/md0

mdadm --assemble --scan -v
mdadm: looking for devices for /dev/md0
mdadm: /dev/sdd has wrong uuid.
mdadm: /dev/sdc has wrong uuid.
mdadm: /dev/sdb has wrong uuid.
mdadm: no RAID superblock on /dev/sda2
mdadm: no RAID superblock on /dev/sda1
mdadm: no RAID superblock on /dev/sda
mdadm: no RAID superblock on /dev/mmcblk0p2
mdadm: no RAID superblock on /dev/mmcblk0p1
mdadm: no RAID superblock on /dev/mmcblk0
mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sdc1 is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 0.
mdadm: failed to add /dev/sdc1 to /dev/md0: Invalid argument
mdadm: failed to add /dev/sdd1 to /dev/md0: Invalid argument
mdadm: failed to add /dev/sdb1 to /dev/md0: Invalid argument
mdadm: failed to RUN_ARRAY /dev/md0: Invalid argument

mdadm --examine /dev/sd[bcd]1
/dev/sdb1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 8ec7d307:b5609dfd:caea7099:a9d8201c
Name : CaribNAS:0 (local to host CaribNAS)
Creation Time : Thu Jan 16 02:44:56 2014
Raid Level : raid5
Raid Devices : 3

Avail Dev Size : 1953260975 (931.39 GiB 1000.07 GB)
Array Size : 1953260544 (1862.77 GiB 2000.14 GB)
Used Dev Size : 1953260544 (931.39 GiB 1000.07 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 8d6ee8dc:1d62a673:74189894:14192b98

Update Time : Sat Mar 15 15:46:41 2014
Checksum : 2cd2a9ae - correct
Events : 345

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 0
Array State : AAA ('A' == active, '.' == missing)
/dev/sdc1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 8ec7d307:b5609dfd:caea7099:a9d8201c
Name : CaribNAS:0 (local to host CaribNAS)
Creation Time : Thu Jan 16 02:44:56 2014
Raid Level : raid5
Raid Devices : 3

Avail Dev Size : 1953260975 (931.39 GiB 1000.07 GB)
Array Size : 1953260544 (1862.77 GiB 2000.14 GB)
Used Dev Size : 1953260544 (931.39 GiB 1000.07 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 2f8065d8:ff3a060d:4883c25f:12f543f7

Update Time : Sat Mar 15 15:46:41 2014
Checksum : ebf2db04 - correct
Events : 345

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 1
Array State : AAA ('A' == active, '.' == missing)
/dev/sdd1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 8ec7d307:b5609dfd:caea7099:a9d8201c
Name : CaribNAS:0 (local to host CaribNAS)
Creation Time : Thu Jan 16 02:44:56 2014
Raid Level : raid5
Raid Devices : 3

Avail Dev Size : 1953260975 (931.39 GiB 1000.07 GB)
Array Size : 1953260544 (1862.77 GiB 2000.14 GB)
Used Dev Size : 1953260544 (931.39 GiB 1000.07 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 38c6ce39:0ad01a8c:d188672b:4b06661f

Update Time : Sat Mar 15 15:46:41 2014
Checksum : c037ccdb - correct
Events : 345

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 2
Array State : AAA ('A' == active, '.' == missing)

Last edited by kevinsmith42; 03-17-2014 at 04:02 PM.
 
Old 03-17-2014, 04:20 PM   #2
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: CentOS 6 (pre-systemd)
Posts: 2,609

Rep: Reputation: 702
Check if there is some old data in /etc/mdadm.conf. That may be the source of the extra array.
 
Old 03-18-2014, 02:03 AM   #3
kevinsmith42
LQ Newbie
 
Registered: Mar 2014
Posts: 4

Original Poster
Rep: Reputation: Disabled
I don't have a /etc/mdadm.conf; the one in /etc/mdadm/mdadm.conf only has the correct array defined.

cat /etc/mdadm/mdadm.conf
DEVICE partitions
HOMEHOST CaribNAS
MAILADDR CaribNAS@gmail.com
ARRAY /dev/md0 metadata=1.2 Name=CaribNAS:0 UUID=8ec7d307:b5609dfd:caea7099:a9d8201c
 
Old 03-18-2014, 10:13 AM   #4
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: CentOS 6 (pre-systemd)
Posts: 2,609

Rep: Reputation: 702
It is possible to create whole-drive RAID on sda, sdb, sdc, but not recommended since then there are no unique disk labels. It's better to create RAID on partitions sda1, sdb1, sdc1. Your system has somehow had both applied. Whichever one was written second has messed up the superblocks of the first.
 
Old 03-18-2014, 10:37 AM   #5
kevinsmith42
LQ Newbie
 
Registered: Mar 2014
Posts: 4

Original Poster
Rep: Reputation: Disabled
I created the array using the partitions rather than the disks using the command:
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sd[bcd]1

This had been working with no problems since mid-January; suddenly something changed and broke it. The partitions show an Update Time of Sat Mar 15 15:46:41 2014 - what does that mean?

Can I recover the array without losing data?
 
Old 03-18-2014, 11:34 AM   #6
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: CentOS 6 (pre-systemd)
Posts: 2,609

Rep: Reputation: 702
I'm basing my guess on this:

Code:
mdadm: /dev/sdd has wrong uuid.
mdadm: /dev/sdc has wrong uuid.
mdadm: /dev/sdb has wrong uuid.
It looks like it sees a superblock, but with the wrong ID. Or maybe the message is just misleading me.
Try listing all raid components:

Code:
mdadm --examine --scan -v
 
Old 03-19-2014, 01:58 AM   #7
kevinsmith42
LQ Newbie
 
Registered: Mar 2014
Posts: 4

Original Poster
Rep: Reputation: Disabled
The UUID agrees with the Array UUID reported on the partitions. It was the second array that was working; the first one I created before finding out that the array should be built from the partitions rather than the whole disks.

mdadm --examine --scan -v
ARRAY /dev/md/0 level=raid5 metadata=1.2 num-devices=3 UUID=c20f8d11:b81ae9b6:1b318343:ed35987e name=raspbmc:0
devices=/dev/sdd,/dev/sdc,/dev/sdb
ARRAY /dev/md/0 level=raid5 metadata=1.2 num-devices=3 UUID=8ec7d307:b5609dfd:caea7099:a9d8201c name=CaribNAS:0
devices=/dev/sdd1,/dev/sdc1,/dev/sdb1
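So each disk carries two v1.2 superblocks: a stale whole-disk one and the live one inside partition 1. v1.x metadata sits 8 sectors (4 KiB) from the start of whatever device it describes, so assuming the common first-partition start of sector 2048 (worth verifying with fdisk -l - I haven't posted my partition tables), the whole-disk superblock should land in the gap before /dev/sdb1, not inside it:

```shell
# Whole-disk v1.2 superblock offset vs. an assumed partition-1 start of
# sector 2048 (check your own disks with `fdisk -l` before relying on this):
SECTOR_BYTES=512
SB_BYTES=$((8 * SECTOR_BYTES))          # whole-disk superblock byte offset
PART1_BYTES=$((2048 * SECTOR_BYTES))    # assumed start of first partition
echo "superblock at byte $SB_BYTES, partition 1 at byte $PART1_BYTES"
if [ "$SB_BYTES" -lt "$PART1_BYTES" ]; then
    echo "superblock is in the pre-partition gap"
fi
# → superblock at byte 4096, partition 1 at byte 1048576
# → superblock is in the pre-partition gap
```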
 
Old 03-19-2014, 04:31 PM   #8
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: CentOS 6 (pre-systemd)
Posts: 2,609

Rep: Reputation: 702
*** WARNING - SEVERE TIRE DAMAGE ***
If you want to get rid of it, you can zero the area starting 8 sectors into each of the 3 disks, which is where the v1.2 md superblock is located. Hopefully partition 1 starts much later.
Code:
mdadm --zero-superblock /dev/sdb
mdadm --zero-superblock /dev/sdc
mdadm --zero-superblock /dev/sdd
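If you'd rather rehearse first, the same layout can be poked at in a scratch file instead of a real disk - a v1.2 superblock begins with the magic a92b4efc, stored little-endian at byte 4096. This sketch plants that magic and then clears it, which is roughly what --zero-superblock does to the whole-disk metadata (scratch file only; nothing here touches sdb/sdc/sdd):

```shell
# Rehearse superblock zeroing on a throwaway 2 MiB file, not a real device.
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" bs=1M count=2 status=none
# Plant the v1.2 magic (0xa92b4efc, little-endian on disk) at byte 4096:
printf '\xfc\x4e\x2b\xa9' | dd of="$IMG" bs=1 seek=4096 conv=notrunc status=none
od -A d -t x1 -j 4096 -N 4 "$IMG" | head -1   # magic present
# Zero those 4 bytes, as --zero-superblock would for the metadata block:
dd if=/dev/zero of="$IMG" bs=1 seek=4096 count=4 conv=notrunc status=none
od -A d -t x1 -j 4096 -N 4 "$IMG" | head -1   # magic cleared
rm -f "$IMG"
```

Once the stale whole-disk superblocks are gone, mdadm --assemble --scan should only find the partition members.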
 
  

