LinuxQuestions.org
Old 09-06-2012, 06:09 AM   #1
d3berger
LQ Newbie
 
Registered: Sep 2012
Posts: 5

Rep: Reputation: Disabled
mdadm superblock mount issue


Hello, I have been looking around for answers to this issue, and I think I might have found some, but I wanted to confirm, so here we are.

I am using mdadm. I had a RAID 1 of two 2TB drives that I converted to a RAID 5 of three 2TB drives. The RAID 1 was assigned as /dev/md0 and mounted at /mnt/2tb, using /dev/sdb1 and /dev/sdd1. /dev/sdc1 is the new drive that I added. Here are the commands I used to convert:

Code:
mdadm /dev/md0 --fail /dev/sdd1
mdadm --detail /dev/md0
mdadm /dev/md0 --remove /dev/sdd1
mdadm --create /dev/md1 --level 5 --raid-devices 2 /dev/sdc1 /dev/sdd1
mkfs -t ext3 /dev/md1
mkdir /mnt/4tb
mount /dev/md1 /mnt/4tb/
cp -r /mnt/2tb/* /mnt/4tb/ &
umount /dev/md0
mdadm --stop /dev/md0
umount /dev/md1
mdadm /dev/md1 --add /dev/sdb1
mdadm /dev/md1 --grow --raid-devices 3
e2fsck -f /dev/md1
resize2fs /dev/md1
mount /dev/md1 /mnt/4tb/
This process worked great: my RAID 5 was built and was usable for a few weeks. Then I had to reboot the server due to a power failure. The server is on a UPS, but I shut it down after the power outage went on for about 10 minutes.

When the server came back up, I got warnings saying that my RAID array could not be mounted and that the superblocks are bad. Here is the log output:

Code:
Log of fsck -C -R -A -a
Wed Sep  5 02:06:37 2012

fsck 1.41.3 (12-Oct-2008)
fsck.ext3: Invalid argument while trying to open /dev/md1
/dev/md1:
The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>

fsck died with exit status 8

Wed Sep  5 02:06:37 2012
The following is some information about my RAID array. The array seems to be in working order, but I am unable to mount it.

Code:
nf7s:~# fdisk -l

Disk /dev/sda: 74.3 GB, 74355769344 bytes
255 heads, 63 sectors/track, 9039 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x2ea524e0

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        8709    69955011   83  Linux
/dev/sda2            8710        9039     2650725    5  Extended
/dev/sda5            8710        9039     2650693+  82  Linux swap / Solaris

Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x45f908f8

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      243191  1953430528   fd  Linux raid autodetect

Disk /dev/sdc: 2000.3 GB, 2000398934016 bytes
81 heads, 63 sectors/track, 765633 cylinders
Units = cylinders of 5103 * 512 = 2612736 bytes
Disk identifier: 0x000d9887

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      765634  1953513560   fd  Linux raid autodetect

Disk /dev/sdd: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x45f908f8

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1      243191  1953430528   fd  Linux raid autodetect

Disk /dev/md1: 4000.6 GB, 4000625590272 bytes
2 heads, 4 sectors/track, 976715232 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md1 doesn't contain a valid partition table
Code:
nf7s:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md1 : active raid5 sdb1[0] sdd1[2] sdc[1]
      3906860928 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
Code:
nf7s:~# mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90
  Creation Time : Mon Sep  3 18:11:14 2012
     Raid Level : raid5
     Array Size : 3906860928 (3725.87 GiB 4000.63 GB)
  Used Dev Size : 1953430464 (1862.94 GiB 2000.31 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Wed Sep  5 02:52:14 2012
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 911d722b:4e787355:6cff238c:ac4624a3 (local to host nf7s)
         Events : 0.10

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       32        1      active sync   /dev/sdc
       2       8       49        2      active sync   /dev/sdd1
Code:
nf7s:~# mdadm --examine /dev/sdb1
/dev/sdb1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 911d722b:4e787355:6cff238c:ac4624a3 (local to host nf7s)
  Creation Time : Mon Sep  3 18:11:14 2012
     Raid Level : raid5
  Used Dev Size : 1953430464 (1862.94 GiB 2000.31 GB)
     Array Size : 3906860928 (3725.87 GiB 4000.63 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 1

    Update Time : Thu Sep  6 02:46:35 2012
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
       Checksum : b703eb6f - correct
         Events : 10

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     0       8       17        0      active sync   /dev/sdb1

   0     0       8       17        0      active sync   /dev/sdb1
   1     1       8       32        1      active sync   /dev/sdc
   2     2       8       49        2      active sync   /dev/sdd1
Code:
nf7s:~# mdadm --examine /dev/sdc
/dev/sdc:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 911d722b:4e787355:6cff238c:ac4624a3 (local to host nf7s)
  Creation Time : Mon Sep  3 18:11:14 2012
     Raid Level : raid5
  Used Dev Size : 1953430464 (1862.94 GiB 2000.31 GB)
     Array Size : 3906860928 (3725.87 GiB 4000.63 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 1

    Update Time : Thu Sep  6 02:46:35 2012
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
       Checksum : b703eb80 - correct
         Events : 10

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     1       8       32        1      active sync   /dev/sdc

   0     0       8       17        0      active sync   /dev/sdb1
   1     1       8       32        1      active sync   /dev/sdc
   2     2       8       49        2      active sync   /dev/sdd1
Code:
nf7s:~# mdadm --examine /dev/sdd1
/dev/sdd1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 911d722b:4e787355:6cff238c:ac4624a3 (local to host nf7s)
  Creation Time : Mon Sep  3 18:11:14 2012
     Raid Level : raid5
  Used Dev Size : 1953430464 (1862.94 GiB 2000.31 GB)
     Array Size : 3906860928 (3725.87 GiB 4000.63 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 1

    Update Time : Thu Sep  6 02:46:35 2012
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0
       Checksum : b703eb93 - correct
         Events : 10

         Layout : left-symmetric
     Chunk Size : 64K

      Number   Major   Minor   RaidDevice State
this     2       8       49        2      active sync   /dev/sdd1

   0     0       8       17        0      active sync   /dev/sdb1
   1     1       8       32        1      active sync   /dev/sdc
   2     2       8       49        2      active sync   /dev/sdd1
Code:
nf7s:~# mount /dev/md1 /mnt/4tb
mount: unknown filesystem type 'ext4'
nf7s:~# mount -t ext3 /dev/md1 /mnt/4tb
mount: wrong fs type, bad option, bad superblock on /dev/md1,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

nf7s:~# dmesg | tail
[  784.543511] raid5: device sdb1 operational as raid disk 0
[  784.543519] raid5: device sdd1 operational as raid disk 2
[  784.543525] raid5: device sdc operational as raid disk 1
[  784.544226] raid5: allocated 3170kB for md1
[  784.544233] raid5: raid level 5 set md1 active with 3 out of 3 devices, algorithm 2
[  784.544237] RAID5 conf printout:
[  784.544241]  --- rd:3 wd:3
[  784.544245]  disk 0, o:1, dev:sdb1
[  784.544250]  disk 1, o:1, dev:sdc
[  784.544254]  disk 2, o:1, dev:sdd1
[88959.568808] EXT3-fs: md1: couldn't mount because of unsupported optional features (3d18000).
The only weird thing I see is that disk 1 of my RAID says it's using /dev/sdc instead of /dev/sdc1. Does that make any difference?

The two older drives I was using in RAID 1 are Western Digital WD20EARS, while the new one I added is a WD20EARX (the Advanced Format type). Does that make a difference, and could it be causing the issue?

The other thing I wanted to ask: I never zeroed the superblocks of the two drives when I went from RAID 1 to RAID 5. Do you think that this is the cause of the issue? Am I safe to zero the superblocks and then recreate the array?

Is this a problem with the filesystem, and should I run mkfs -t ext3 /dev/md1?

I have some backups of the data, as I wasn't sure the RAID 1 to RAID 5 conversion would succeed, but I would prefer to recover it.

Thank you so much for reading this and any information you may have.
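For anyone following along: before trying anything destructive, it helps to gather read-only evidence of what ext3 thinks is on the array. A sketch, assuming the array is assembled as /dev/md1 as shown above; none of these commands write to the disk:

```shell
# All read-only: inspect the md layer and the filesystem superblock.
cat /proc/mdstat            # kernel's view of the assembled array
mdadm --detail /dev/md1     # member devices, state, chunk size, UUID

# Print the ext2/3 superblock header without modifying anything:
dumpe2fs -h /dev/md1

# Dry-run fsck: -n answers "no" to every repair prompt, so the
# filesystem is only read, never written:
e2fsck -n /dev/md1
```

If dumpe2fs cannot find a superblock either, that points at the superblock's location on the device rather than at the array itself.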
 
Old 09-06-2012, 10:40 AM   #2
FizzerJE
LQ Newbie
 
Registered: Dec 2004
Posts: 12

Rep: Reputation: 1
Don't quote me; I'm a definite Linux noobie here.
(Anyway, if I post a wrong answer, it will probably spur someone to post a correct one.)

/dev/sdc instead of /dev/sdc1

I believe that puts the RAID superblock in a different location, so it isn't found again. I say this because in my first dabbling with software RAID I used whole disks rather than partitions, and the RAID never worked; on reboot it was not there.

I have converted three RAID 1 arrays to RAID 5 successfully, BUT I did not do it the way you have.

If you're not bothered about losing the data, since you have backups:

Have you tried failing the new drive and starting the RAID set degraded?

Don't touch the FS yet; this is an mdadm issue.
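Spelled out as a hedged sketch against the device names in this thread (the mount attempt is read-only, so nothing is written while testing):

```shell
# Fail and remove the whole-disk member, leaving the array running
# degraded on the two original drives:
mdadm /dev/md1 --fail /dev/sdc
mdadm /dev/md1 --remove /dev/sdc
cat /proc/mdstat                  # expect [3/2] [U_U] (degraded)

# Try the filesystem read-only while degraded:
mount -t ext3 -o ro /dev/md1 /mnt/4tb
```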
 
Old 09-06-2012, 10:53 AM   #3
FizzerJE
LQ Newbie
 
Registered: Dec 2004
Posts: 12

Rep: Reputation: 1
I have found my notes on RAID 1 to 5.

I remember now: as my RAID is a bit older, I use metadata type 0.90, AS DO YOU, so I have to be careful to specify this, because the newer types save the metadata to a different location, thus hosing the filesystem.
I believe 1.1 and 1.2 save the superblock at the beginning, where 0.90 saves it near the end, so using a later type will hose the FS superblock. Bear that in mind also.
You did ask for ANY info..

Code:
mdadm --stop /dev/md0
mdadm --create /dev/md0 --level=5 --metadata=0.9 --raid-devices=3 /dev/sda1 /dev/sdb1 missing
mdadm --grow --bitmap=internal /dev/md0
mdadm --manage /dev/md0 --add /dev/sdc1   # << the new drive
That's what I did for my three RAID 1 to RAID 5 conversions so far.


Can't guarantee anything


I would go for failing the /dev/sdc device first. That is definitely incorrect.

See if you can start the array.
If not, recreate it with the newer device missing, making sure to use the same metadata version and chunk size (see my problem post).

ENSURE you use --assume-clean. We DO NOT want any rebuilding going on YET!

Then add the new device, sdc1 this time.

All of this could lead to data loss.

No guarantees.
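The recreate path above, as a sketch against this thread's device names. This is dangerous: --create writes new RAID superblocks, and the metadata version, chunk size, layout and device order must all match the original exactly, or the data will be scrambled. Only with backups in hand:

```shell
mdadm --stop /dev/md1

# Recreate with the same parameters mdadm --detail reported:
# metadata 0.90, chunk 64K, left-symmetric, order sdb1 / sdc / sdd1,
# with the suspect slot-1 device left out ("missing"):
mdadm --create /dev/md1 --metadata=0.90 --level=5 --chunk=64 \
      --layout=left-symmetric --raid-devices=3 --assume-clean \
      /dev/sdb1 missing /dev/sdd1
# --assume-clean skips the initial resync, so no parity is rewritten.

# Check the filesystem read-only before touching anything else:
e2fsck -n /dev/md1

# Only once the data is readable, add the third device back:
mdadm /dev/md1 --add /dev/sdc1
```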

Last edited by FizzerJE; 09-06-2012 at 10:55 AM.
 
1 member found this post helpful.
Old 09-06-2012, 12:31 PM   #4
d3berger
LQ Newbie
 
Registered: Sep 2012
Posts: 5

Original Poster
Rep: Reputation: Disabled
Thanks for the reply FizzerJE.

I think I will try

Code:
mdadm /dev/md1 --fail /dev/sdc
mdadm /dev/md1 --remove /dev/sdc
mdadm --zero-superblock /dev/sdc
mdadm --zero-superblock /dev/sdc1   # might not work, but I'll try it
mdadm /dev/md1 --add /dev/sdc1
Does that make sense?

Last edited by d3berger; 09-06-2012 at 09:55 PM.
 
Old 09-06-2012, 09:56 PM   #5
d3berger
LQ Newbie
 
Registered: Sep 2012
Posts: 5

Original Poster
Rep: Reputation: Disabled
I ran the code in my last post, but before I added the drive back I went into fdisk and remade the partitions. Then I ran the final line to add /dev/sdc1 back. It is now rebuilding. When it is done I will try to reboot and see if it mounts.
 
Old 09-07-2012, 08:06 PM   #6
d3berger
LQ Newbie
 
Registered: Sep 2012
Posts: 5

Original Poster
Rep: Reputation: Disabled
I rebooted after it finished rebuilding. It still will not mount, with the same error as before:

Code:
/dev/md1:
The superblock could not be read or does not describe a correct ext2
filesystem.  If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
On the plus side, my fdisk and RAID details now look better:

Code:
nf7s:~# fdisk -l

Disk /dev/sda: 74.3 GB, 74355769344 bytes
255 heads, 63 sectors/track, 9039 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x2ea524e0

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        8709    69955011   83  Linux
/dev/sda2            8710        9039     2650725    5  Extended
/dev/sda5            8710        9039     2650693+  82  Linux swap / Solaris

Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x45f908f8

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      243191  1953430528   fd  Linux raid autodetect

Disk /dev/sdc: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xdc86611c

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      243201  1953512001   fd  Linux raid autodetect

Disk /dev/sdd: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x45f908f8

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1      243191  1953430528   fd  Linux raid autodetect

Disk /dev/md1: 4000.6 GB, 4000625590272 bytes
2 heads, 4 sectors/track, 976715232 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md1 doesn't contain a valid partition table
Code:
nf7s:~# mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90
  Creation Time : Mon Sep  3 18:11:14 2012
     Raid Level : raid5
     Array Size : 3906860928 (3725.87 GiB 4000.63 GB)
  Used Dev Size : 1953430464 (1862.94 GiB 2000.31 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Fri Sep  7 16:46:58 2012
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 911d722b:4e787355:6cff238c:ac4624a3 (local to host nf7s)
         Events : 0.26

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
This must be an issue with the filesystem, as the RAID looks fine now.

Does anyone know if I should try the command in the error log?

Code:
e2fsck -b 8193 /dev/md1
Would this do anything? Or could it damage the filesystem?
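For what it's worth, 8193 is the first backup superblock only for filesystems with 1 KiB blocks; a filesystem this size almost certainly uses 4 KiB blocks, whose first backups live at 32768 and 98304. A sketch of how to check, using dry-run flags so nothing is written:

```shell
# -n makes mke2fs a dry run: it prints where superblock backups WOULD
# be placed for a device of this size, without writing anything:
mke2fs -n /dev/md1

# Try e2fsck against the backup locations, read-only first
# (-n answers "no" to every repair prompt):
e2fsck -n -b 32768 /dev/md1
e2fsck -n -b 98304 /dev/md1

# Only if a backup superblock is accepted should -n be dropped
# to actually attempt a repair.
```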
 
Old 09-11-2012, 08:36 AM   #7
d3berger
LQ Newbie
 
Registered: Sep 2012
Posts: 5

Original Poster
Rep: Reputation: Disabled
I tried a program called testdisk, but after a long search it could not find a fixable filesystem on /dev/md1. Does anyone have any tips on using this program? Does anyone know of a way to copy the data from the RAID to a spare disk? Any other way to recover the data or fix the filesystem?
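On copying the data off first: one approach is to image the assembled array to spare storage, so every further repair experiment is reversible. A sketch, assuming a spare disk with at least 4 TB free mounted at /mnt/spare (a hypothetical path):

```shell
# GNU ddrescue keeps a map file, so an interrupted copy can resume,
# and it skips unreadable sectors instead of aborting:
ddrescue /dev/md1 /mnt/spare/md1.img /mnt/spare/md1.map

# Plain dd also works if every sector is readable:
# dd if=/dev/md1 of=/mnt/spare/md1.img bs=1M conv=noerror,sync

# Repair tools (testdisk, e2fsck -b ...) can then be pointed at the
# image via a loop device instead of at the live array:
losetup -f --show /mnt/spare/md1.img
```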
 
  

