LinuxQuestions.org
08-27-2017, 01:10 PM   #1
mitchd123
LQ Newbie
 
Registered: Jan 2009
Distribution: Ubuntu 14.04
Posts: 21

Raid recovery assistance


Wondering what to try next, other than restoring from backup.

Problem: the array is showing an unknown filesystem. I'm concerned the array assembled in the wrong order, or something similar.

sudo mount /dev/md0 /mnt/raid/

mount: wrong fs type, bad option, bad superblock on /dev/md0,
missing codepage or helper program, or other error

In some cases useful info is found in syslog - try
dmesg | tail or so.
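(Side note: these read-only checks should show whether any filesystem signature is still visible on the device; neither writes anything:

sudo blkid -p /dev/md0    # low-level probe for filesystem signatures
sudo file -s /dev/md0     # classify the raw contents of the device

If neither reports an ext4 signature, the superblock region itself is damaged or has moved.)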

---------------------------------------
dmesg (attached); the RAID messages start around the 14.x timestamps.

History:
I had upgraded to a larger boot drive (/dev/sdd) and had problems reloading GRUB. I gave up and reinstalled the OS to a new boot drive using LVM. This boot drive is NOT part of the RAID array. I copied the entire OS from the old boot drive to the new one and booted the old install with a new kernel. The RAID modules didn't load because the kernel didn't match the installed modules. I updated to the correct kernel level, but was then told my RAID array was missing a superblock. I checked the drives and they were all consistent, showing the same update time, etc. The boot drive has nothing to do with the three-drive RAID 5 array.
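(For the module angle, the read-only check I mean is along these lines; raid456 is the module that provides the raid4/5/6 personalities:

lsmod | grep raid456      # is the raid4/5/6 personality module loaded?
sudo modprobe raid456     # load it if it isn't
cat /proc/mdstat          # the "Personalities" line confirms what is active
)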

I ran the following commands on the three-drive RAID 5 array (all drives healthy):

mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sdc1 /dev/sde1 /dev/sdf1

The array went into a resync, which I allowed to complete. Now the array won't mount, although the RAID modules are loading.
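(Array state, member order, and resync progress can be inspected without writing anything:

sudo mdadm --detail /dev/md0    # state, member order, any resync progress
watch -n 5 cat /proc/mdstat     # refresh the status every 5 seconds
)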



---------------------------------
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdc1[0] sde1[1] sdf1[3]
7801143296 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
bitmap: 0/30 pages [0KB], 65536KB chunk

unused devices: <none>

---------------------------------------------------

mdadm -E /dev/sdc1 /dev/sde1 /dev/sdf1
/dev/sdc1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 964fd303:748b8f0e:58f384ce:3fde97cc
Name : virtual:0 (local to host virtual)
Creation Time : Sat Aug 26 16:21:20 2017
Raid Level : raid5
Raid Devices : 3

Avail Dev Size : 7801143296 (3719.88 GiB 3994.19 GB)
Array Size : 7801143296 (7439.75 GiB 7988.37 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 77580f12:adf88476:d9c1448b:b041443f

Internal Bitmap : 8 sectors from superblock
Update Time : Sun Aug 27 04:10:50 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : adb58f4c - correct
Events : 6257

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 0
Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sde1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 964fd303:748b8f0e:58f384ce:3fde97cc
Name : virtual:0 (local to host virtual)
Creation Time : Sat Aug 26 16:21:20 2017
Raid Level : raid5
Raid Devices : 3

Avail Dev Size : 7801143296 (3719.88 GiB 3994.19 GB)
Array Size : 7801143296 (7439.75 GiB 7988.37 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : 1cb11ccc:2aeb095d:6be1838b:7d7b33b7

Internal Bitmap : 8 sectors from superblock
Update Time : Sun Aug 27 04:10:50 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : c67633cf - correct
Events : 6257

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 1
Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdf1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 964fd303:748b8f0e:58f384ce:3fde97cc
Name : virtual:0 (local to host virtual)
Creation Time : Sat Aug 26 16:21:20 2017
Raid Level : raid5
Raid Devices : 3

Avail Dev Size : 7801143296 (3719.88 GiB 3994.19 GB)
Array Size : 7801143296 (7439.75 GiB 7988.37 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262056 sectors, after=0 sectors
State : clean
Device UUID : c322a4fb:ec99f835:0ce9cb45:ce6b376d

Internal Bitmap : 8 sectors from superblock
Update Time : Sun Aug 27 04:10:50 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : 3f384c2c - correct
Events : 6257

Layout : left-symmetric
Chunk Size : 512K

Device Role : Active device 2
Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
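(For a quick side-by-side of the ordering-related fields above, the same data can be pulled with one grep; the glob expands to the three member partitions:

sudo mdadm -E /dev/sd[cef]1 | grep -E 'Device Role|Events|Update Time'

All three members show the same event count (6257) and distinct roles 0/1/2, so the ordering recorded on disk looks consistent.)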

--------------------------------

fsck.ext4 /dev/md0
e2fsck 1.42.13 (17-May-2015)
ext2fs_open2: Bad magic number in super-block
fsck.ext4: Superblock invalid, trying backup blocks...
fsck.ext4: Bad magic number in super-block while trying to open /dev/md0

The superblock could not be read or does not describe a valid ext2/ext3/ext4 filesystem. If the device is valid and it really contains an ext2/ext3/ext4 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
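(Caution before trying those: the backup locations depend on the block size the filesystem was created with. mke2fs -n is a dry run that only prints where the backups would be for given parameters, and e2fsck -n opens the device read-only, so neither writes anything; the mke2fs output is only meaningful if its options match how the filesystem was originally created:

sudo mke2fs -n /dev/md0            # dry run: list candidate backup superblocks
sudo e2fsck -n -b 32768 /dev/md0   # read-only check against one backup

32768 is just the usual first backup for a 4 KiB block size.)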

-------------------

# gdisk -l /dev/sdc
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdc: 7813971633 sectors, 3.6 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 310D342C-B691-49D2-B485-F2E9706173A8
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 7813971599
Partitions will be aligned on 2048-sector boundaries
Total free space is 12566126 sectors (6.0 GiB)

Number Start (sector) End (sector) Size Code Name
1 2048 7801407487 3.6 TiB FD00 Linux RAID

# gdisk -l /dev/sde
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sde: 7813971633 sectors, 3.6 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 310D342C-B691-49D2-B485-F2E9706173A8
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 7813971599
Partitions will be aligned on 2048-sector boundaries
Total free space is 12566126 sectors (6.0 GiB)

Number Start (sector) End (sector) Size Code Name
1 2048 7801407487 3.6 TiB FD00 Linux RAID
# gdisk -l /dev/sdf
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdf: 7813971633 sectors, 3.6 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 310D342C-B691-49D2-B485-F2E9706173A8
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 7813971599
Partitions will be aligned on 2048-sector boundaries
Total free space is 12566126 sectors (6.0 GiB)

Number Start (sector) End (sector) Size Code Name
1 2048 7801407487 3.6 TiB FD00 Linux RAID

------------------------------
gdisk -l /dev/md0
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
MBR: not present
BSD: not present
APM: not present
GPT: not present

Creating new GPT entries.
Disk /dev/md0: 15602286592 sectors, 7.3 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 58E895E9-019A-41C3-934A-0A65BC4BBD96
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 15602286558
Partitions will be aligned on 2048-sector boundaries
Total free space is 15602286525 sectors (7.3 TiB)

Number Start (sector) End (sector) Size Code Name
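(An empty partition table on /dev/md0 is expected if the filesystem was created directly on the md device; what is missing is the filesystem signature itself. wipefs with no options only lists signatures and erases nothing:

sudo wipefs /dev/md0    # list-only by default; nothing is erased
)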



-----------------------------------

All disks test healthy via SMART (smartctl).
Attached: dmesg.txt (59.6 KB)
 
08-27-2017, 06:23 PM   #2
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: Fedora 33
Posts: 3,598

Quote:
I'm concerned the array assembled in the wrong order, or something similar.
The superblock on each member drive records its position in the array precisely to prevent this.

Quote:
The RAID modules didn't load because the kernel didn't match the installed modules.
Although newer metadata formats have been added, I don't think any older formats have ever been dropped, so a newer kernel can always assemble an older RAID set.

It is more likely that a disk was inadvertently overwritten when updating the OS. The resync would then corrupt the data on the other drives.
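(A cheap read-only test of that: an ext2/3/4 superblock starts 1024 bytes into the filesystem and carries the magic 0xEF53 at offset 0x38 within it, i.e. at byte 1080 of the device, assuming the filesystem was created directly on /dev/md0:

sudo dd if=/dev/md0 bs=1 skip=1080 count=2 2>/dev/null | od -An -tx1
# an intact ext2/3/4 filesystem prints: 53 ef

If that magic is gone, the start of the filesystem was overwritten or shifted.)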
 
  



Tags
corrupt, mdadm, raid5, superblock

