
sag47 12-17-2011 01:55 PM

mdadm RAID 5 assemble
 
Code:

[root@stealth configs]# cat /etc/issue
Fedora release 16 (Verne)
Kernel \r on an \m (\l)

Code:

[root@stealth configs]# uname -a
Linux stealth.home 3.1.5-2.fc16.x86_64 #1 SMP Mon Dec 12 21:25:51 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux

Okay. So I recently had to rebuild my system. I have moved my RAID array over, but I can't seem to get it to assemble. The auto-assemble/scan tutorials that have spammed the internet do not work. I have all the information I need; I just don't know mdadm well enough to reassemble my RAID 5 array.

Here's what I know...

It was originally built with the following relevant options
Code:

/dev/md0 was the path
/dev/md0 was formatted to ext4 with mkfs.ext4 -L "Secure Backup" /dev/md0
--level=raid5
--chunk=128
--raid-devices=3

Here is the /etc/mdadm.conf from the old system, which was generated by running mdadm --detail --scan >> /etc/mdadm.conf.
Code:

# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md0 metadata=1.2 name=stealth.home:Video UUID=124bc3bf:6428cd76:578b1a37:0c0419e6
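
For reference, that sort of ARRAY line can also be regenerated straight from the member superblocks (as opposed to --detail --scan, which needs the array running); on the new box that would be roughly:
Code:

# print ARRAY lines from whatever member superblocks mdadm can find
mdadm --examine --scan

# append to the config only after checking the output looks sane
mdadm --examine --scan >> /etc/mdadm.conf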

The fstab entry for my RAID array on the old system was...
Code:

/dev/md0 /media/raid                                  ext4    defaults        1 2

I have attempted to reassemble it following steps outlined in a few places, including with the following command.
Code:

[root@stealth configs]# mdadm --assemble /dev/md0 --auto=yes --scan --update=summaries --verbose
mdadm: looking for devices for /dev/md0
mdadm: no RAID superblock on /dev/sdg1
mdadm: no RAID superblock on /dev/sde1
mdadm: no RAID superblock on /dev/sdd1
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: cannot open device /dev/sdb1: Device or resource busy
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm: cannot open device /dev/dm-2: Device or resource busy
mdadm: cannot open device /dev/dm-1: Device or resource busy
mdadm: cannot open device /dev/dm-0: Device or resource busy
mdadm: cannot open device /dev/sda3: Device or resource busy
mdadm: cannot open device /dev/sda2: Device or resource busy
mdadm: no RAID superblock on /dev/sda1
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm: --update=summaries not understood for 1.x metadata
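
For what it's worth, since the last line says --update=summaries isn't understood for 1.x metadata, the variant to retry once the superblock problem is sorted out is presumably a plain assemble by UUID, without --update (UUID taken from the old mdadm.conf above):
Code:

# plain assemble selected by array UUID; no --update option
mdadm --assemble --scan --verbose --uuid=124bc3bf:6428cd76:578b1a37:0c0419e6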

Please assist me in reassembling this array. I thought about the mdadm --create /dev/md0 --options /dev/sd[de] missing approach outlined in this article, but that's pretty destructive for what should be a normally working array.

sag47 12-17-2011 06:13 PM

Well, I was in the #linux-raid IRC channel on freenode and, although I don't have a solution yet, I do have more troubleshooting information to post.

Code:

# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
unused devices: <none>

Code:

[root@stealth media]# mdadm -A /dev/md0 /dev/sd[deg]1
mdadm: no recogniseable superblock on /dev/sdd1
mdadm: /dev/sdd1 has no superblock - assembly aborted

Code:

[root@stealth media]# mdadm --examine /dev/sdd1
mdadm: No md superblock detected on /dev/sdd1.
[root@stealth media]# mdadm --examine /dev/sdg1
mdadm: No md superblock detected on /dev/sdg1.
[root@stealth media]# mdadm --examine /dev/sde1
mdadm: No md superblock detected on /dev/sde1.
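
For completeness, it may also be worth examining the whole-disk devices, in case the array was originally created on the raw disks rather than the partitions (in which case the partitions would legitimately have no superblock):
Code:

# check the raw disks as well as the partitions for md metadata
mdadm --examine /dev/sdd /dev/sde /dev/sdg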

Here is some fdisk -l output. I only have three 1 TB drives, so it's easy to tell which devices belong to the RAID array. Also, each of them is properly flagged as Linux raid autodetect.

Code:

Disk /dev/sda: 128.0 GB, 128035676160 bytes
255 heads, 63 sectors/track, 15566 cylinders, total 250069680 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

  Device Boot      Start        End      Blocks  Id  System
/dev/sda1              1  250069679  125034839+  ee  GPT

Disk /dev/mapper/vg_stealth-lv_swap: 10.4 GB, 10435428352 bytes
255 heads, 63 sectors/track, 1268 cylinders, total 20381696 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/vg_stealth-lv_root: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders, total 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/vg_stealth-lv_home: 63.4 GB, 63384322048 bytes
255 heads, 63 sectors/track, 7706 cylinders, total 123797504 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0001323b

  Device Boot      Start        End      Blocks  Id  System
/dev/sdb1            2048  976773119  488385536  83  Linux

Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00036944

  Device Boot      Start        End      Blocks  Id  System
/dev/sdc1            2048  3907028991  1953513472  83  Linux

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000183e8

  Device Boot      Start        End      Blocks  Id  System
/dev/sdd1            2048  1953523711  976760832  fd  Linux raid autodetect

Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000f3808

  Device Boot      Start        End      Blocks  Id  System
/dev/sde1            2048  1953523711  976760832  fd  Linux raid autodetect

Disk /dev/sdg: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00094012

  Device Boot      Start        End      Blocks  Id  System
/dev/sdg1            2048  1953523711  976760832  fd  Linux raid autodetect
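
Since the partition types look right but mdadm sees no superblocks, another low-level check is to ask blkid what signatures are actually present on those partitions (a healthy md member normally probes as linux_raid_member):
Code:

# probe for on-disk signatures; md members normally report TYPE="linux_raid_member"
blkid /dev/sdd1 /dev/sde1 /dev/sdg1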

Since I've run out of ideas on the software side, I bought a SAS card to replace the SiI3132 eSATA port-multiplier HBA that came with my TR-5M. I'm hoping a better-quality, more reliable controller will resolve the issue.

If someone has a suggestion for how I can handle those apparently "bad" superblocks, I would like to hear it. I have been troubleshooting this thoroughly, and buying a card is the only significant action I can take, since the data on the array is important. I have the majority of it duplicated on other drives, but there are still some parts I need off that array.

