
kevinsmith42 03-17-2014 03:00 PM

RAID5 array missing and won't assemble
 
I have a Raspberry Pi running XBMC (Linux CaribNAS 3.10.24 #2 PREEMPT Mon Dec 23 05:18:12 UTC 2013 armv6l GNU/Linux) with a 3-disk RAID 5 array that had been working until yesterday, when it stopped overnight and now will not start. I have a lot of data on it that is not backed up, so I would like to recover it without data loss if possible.

I have noticed that mdadm --examine --scan shows two /dev/md/0 arrays; only the second one is correct, and I can't get rid of the first. I don't think this is the cause of the failure, but I would like to tidy things up.
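
For reference, a read-only way to see which devices carry an md signature, and which array UUID each one reports, is to query them with blkid (it only reads the devices, nothing is written):

Code:

blkid /dev/sd[bcd] /dev/sd[bcd]1

If the whole disks report the raspbmc:0 UUID while the partitions report the CaribNAS:0 UUID, that would explain where the stray ARRAY line comes from.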

dmesg |grep md:
md: md0 stopped.
md: sdc1 does not have a valid v1.2 superblock, not importing!
md: md_import_device returned -22
md: sdd1 does not have a valid v1.2 superblock, not importing!
md: md_import_device returned -22
md: sdb1 does not have a valid v1.2 superblock, not importing!
md: md_import_device returned -22


mdadm -D /dev/md0
mdadm: md device /dev/md0 does not appear to be active.


mdadm --examine --scan
ARRAY /dev/md/0 metadata=1.2 UUID=c20f8d11:b81ae9b6:1b318343:ed35987e name=raspbmc:0
ARRAY /dev/md/0 metadata=1.2 UUID=8ec7d307:b5609dfd:caea7099:a9d8201c name=CaribNAS:0


cat /etc/mdadm/mdadm.conf
DEVICE partitions
HOMEHOST CaribNAS
MAILADDR CaribNAS@gmail.com
ARRAY /dev/md0 metadata=1.2 Name=CaribNAS:0 UUID=8ec7d307:b5609dfd:caea7099:a9d8201c

cat /etc/fstab
proc /proc proc defaults 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620 0 0
/dev/sda1 / ext4 defaults 0 0
/dev/sda2 /temp ext4 defaults 0 0
/dev/mmcblk0p1 /boot vfat defaults,noatime 0 0
#/dev/mmcblk0p2 / f2fs defaults,noatime 0 0
/dev/md0 /RAID5 auto defaults 0 3


cat /proc/mdstat
Personalities :
unused devices: <none>

mdadm --stop /dev/md0
mdadm: stopped /dev/md0

mdadm --assemble --scan -v
mdadm: looking for devices for /dev/md0
mdadm: /dev/sdd has wrong uuid.
mdadm: /dev/sdc has wrong uuid.
mdadm: /dev/sdb has wrong uuid.
mdadm: no RAID superblock on /dev/sda2
mdadm: no RAID superblock on /dev/sda1
mdadm: no RAID superblock on /dev/sda
mdadm: no RAID superblock on /dev/mmcblk0p2
mdadm: no RAID superblock on /dev/mmcblk0p1
mdadm: no RAID superblock on /dev/mmcblk0
mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sdc1 is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 0.
mdadm: failed to add /dev/sdc1 to /dev/md0: Invalid argument
mdadm: failed to add /dev/sdd1 to /dev/md0: Invalid argument
mdadm: failed to add /dev/sdb1 to /dev/md0: Invalid argument
mdadm: failed to RUN_ARRAY /dev/md0: Invalid argument

mdadm --examine /dev/sd[bcd]1
/dev/sdb1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 8ec7d307:b5609dfd:caea7099:a9d8201c
Name : CaribNAS:0 (local to host CaribNAS)
Creation Time : Thu Jan 16 02:44:56 2014
Raid Level : raid5
Raid Devices : 3

Avail Dev Size : 1953260975 (931.39 GiB 1000.07 GB)
Array Size : 1953260544 (1862.77 GiB 2000.14 GB)
Used Dev Size : 1953260544 (931.39 GiB 1000.07 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 8d6ee8dc:1d62a673:74189894:14192b98

Update Time : Sat Mar 15 15:46:41 2014
Checksum : 2cd2a9ae - correct
Events : 345

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 0
Array State : AAA ('A' == active, '.' == missing)
/dev/sdc1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 8ec7d307:b5609dfd:caea7099:a9d8201c
Name : CaribNAS:0 (local to host CaribNAS)
Creation Time : Thu Jan 16 02:44:56 2014
Raid Level : raid5
Raid Devices : 3

Avail Dev Size : 1953260975 (931.39 GiB 1000.07 GB)
Array Size : 1953260544 (1862.77 GiB 2000.14 GB)
Used Dev Size : 1953260544 (931.39 GiB 1000.07 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 2f8065d8:ff3a060d:4883c25f:12f543f7

Update Time : Sat Mar 15 15:46:41 2014
Checksum : ebf2db04 - correct
Events : 345

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 1
Array State : AAA ('A' == active, '.' == missing)
/dev/sdd1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 8ec7d307:b5609dfd:caea7099:a9d8201c
Name : CaribNAS:0 (local to host CaribNAS)
Creation Time : Thu Jan 16 02:44:56 2014
Raid Level : raid5
Raid Devices : 3

Avail Dev Size : 1953260975 (931.39 GiB 1000.07 GB)
Array Size : 1953260544 (1862.77 GiB 2000.14 GB)
Used Dev Size : 1953260544 (931.39 GiB 1000.07 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 38c6ce39:0ad01a8c:d188672b:4b06661f

Update Time : Sat Mar 15 15:46:41 2014
Checksum : c037ccdb - correct
Events : 345

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 2
Array State : AAA ('A' == active, '.' == missing)

smallpond 03-17-2014 03:20 PM

Check if there is some old data in /etc/mdadm.conf. That may be the source of the extra array.
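
A quick way to check every place an ARRAY line might be hiding (both config locations), without changing anything, is something like:

Code:

ls -l /etc/mdadm.conf /etc/mdadm/mdadm.conf
grep '^ARRAY' /etc/mdadm.conf /etc/mdadm/mdadm.conf 2>/dev/null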

kevinsmith42 03-18-2014 01:03 AM

I don't have a /etc/mdadm.conf; the one in /etc/mdadm/mdadm.conf has only the correct array defined.

cat /etc/mdadm/mdadm.conf
DEVICE partitions
HOMEHOST CaribNAS
MAILADDR CaribNAS@gmail.com
ARRAY /dev/md0 metadata=1.2 Name=CaribNAS:0 UUID=8ec7d307:b5609dfd:caea7099:a9d8201c

smallpond 03-18-2014 09:13 AM

It is possible to create a whole-drive RAID on sdb, sdc and sdd, but it is not recommended, since then there are no unique disk labels. It is better to create the RAID on the partitions sdb1, sdc1 and sdd1. Your system has somehow had both applied, and whichever set of superblocks was written second has messed up the superblocks of the first.
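
A read-only way to see both sets of metadata side by side, assuming both superblocks are still present, is to examine the whole device and the first partition separately:

Code:

mdadm --examine /dev/sdb     # whole-disk superblock (the old raspbmc:0 array, if any)
mdadm --examine /dev/sdb1    # partition superblock (the CaribNAS:0 array)

--examine only reads the metadata, so this is safe to run on all three disks.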

kevinsmith42 03-18-2014 09:37 AM

I created the array using the partitions rather than the disks using the command:
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sd[bcd]1

This had been working with no problems since mid-January; suddenly something changed and broke it. The partitions show an Update Time of Sat Mar 15 15:46:41 2014 - what does this mean?

Can I recover the array without losing data?
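
In the meantime, before running anything that writes to these disks, I'm thinking of capturing the current metadata so it can be compared later; something like this (the /root output path is just a placeholder, any location with a few MB free would do):

Code:

for d in sdb sdc sdd; do
    dd if=/dev/$d of=/root/$d-head.img bs=1M count=4           # first 4 MiB of each disk, covers the v1.2 superblock
    mdadm --examine /dev/${d}1 > /root/$d-part1-examine.txt    # save the partition metadata as text
done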

smallpond 03-18-2014 10:34 AM

I'm basing my guess on this:

Code:

mdadm: /dev/sdd has wrong uuid.
mdadm: /dev/sdc has wrong uuid.
mdadm: /dev/sdb has wrong uuid.

It looks like it sees a superblock but with the wrong UUID, or maybe the message is just misleading me.
Try listing all RAID components:

Code:

mdadm --examine --scan -v

kevinsmith42 03-19-2014 12:58 AM

The UUID in mdadm.conf agrees with the Array UUID reported on the partitions. The second array is the one that was working; the first is one I created before finding out that the array should be built on the partitions rather than the whole disks.

mdadm --examine --scan -v
ARRAY /dev/md/0 level=raid5 metadata=1.2 num-devices=3 UUID=c20f8d11:b81ae9b6:1b318343:ed35987e name=raspbmc:0
devices=/dev/sdd,/dev/sdc,/dev/sdb
ARRAY /dev/md/0 level=raid5 metadata=1.2 num-devices=3 UUID=8ec7d307:b5609dfd:caea7099:a9d8201c name=CaribNAS:0
devices=/dev/sdd1,/dev/sdc1,/dev/sdb1

smallpond 03-19-2014 03:31 PM

*** WARNING - SEVERE TIRE DAMAGE ***
If you want to get rid of it, you can zero the md superblock on the three whole disks; the v1.2 superblock sits 8 sectors (4 KiB) in from the start of the device. Hopefully partition 1 starts much later than that, so the data on the partitions is untouched.
Code:

mdadm --zero-superblock /dev/sdb
mdadm --zero-superblock /dev/sdc
mdadm --zero-superblock /dev/sdd
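
If you do go that route, a sanity check before and after, followed by a fresh assembly attempt, might look like this (the first two commands only read the disks):

Code:

fdisk -l /dev/sdb            # confirm partition 1 starts well past the 4 KiB superblock offset
mdadm --examine /dev/sdb     # after zeroing, this should report no md superblock on the whole disk
mdadm --assemble --scan -v
cat /proc/mdstat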


