
LinuxQuestions.org (/questions/)
-   Linux - Software (https://www.linuxquestions.org/questions/linux-software-2/)
-   -   mdadm : mount = specify filesystem or bad superblock error (https://www.linuxquestions.org/questions/linux-software-2/mdadm-mount-%3D-specify-filesystem-or-bad-superblock-error-641632/)

ron7000 05-12-2008 08:03 AM

mdadm : mount = specify filesystem or bad superblock error
 
Hi, I desperately need some help with mdadm.
It is version 1.5, under Red Hat Enterprise Linux 3.
I had the /data directory mounted, i.e. mount /dev/md0 /data,
and the machine had been running fine until the system froze a couple of nights ago.
No users could log in, and those who were already logged in could not type anything, get a prompt, etc.
So we rebooted the machine, and it hung during the reboot process on "locating mouse services".
After finding no solution to this, we got a new hard drive, loaded SUSE Linux Enterprise 10 SP1 on it, and
got the machine running.
At this point I can see all my RAID5 disks in the partitioner, so I am happy.
I try to reassemble the array with mdadm -A /dev/md0 /dev/sdd ... /dev/sdi,
and when I then try to mount /dev/md0 I get the error: you must specify the filesystem type, bad option, bad superblock on /dev/md0, missing codepage or other error.

I have tried to reassemble the array through the partitioner and get the same error.
I have tried specifying the filesystem with mount -t, at which point it always tells me wrong fs type, bad option, bad superblock.

I am fairly certain the fs type is XFS.
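
Roughly what I have been running, for reference (the middle device names are abridged above; I'm assuming here that the six members really are /dev/sdd through /dev/sdi):
Code:

# assemble the array from its six member disks
mdadm -A /dev/md0 /dev/sd[d-i]

# then try to mount it, both letting mount guess and forcing XFS
mount /dev/md0 /data
mount -t xfs /dev/md0 /data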

We connected the RAID disks to another machine running Red Hat 3, in case SLES 10 might have had something to do with it. But even on a machine just like the original, running the exact same operating system, I still get the same error.

Up until late Friday I was always able to kick off mdadm -A; now even that is giving me errors.
But I was never able to mount /dev/md0.
Any ideas?

I've tried mdadm --detail and --examine and things look normal: I have 6 disks with 1 spare, RAID 5, persistent superblock, the status says clean, and the checksums had been off but are now all correct.
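
For reference, the checks I've been running are along these lines (again assuming the members are sdd through sdi):
Code:

# array-level view: state, active/spare counts, failed devices
mdadm --detail /dev/md0

# per-disk view of the md superblock on each member
for d in /dev/sd[d-i]; do mdadm --examine $d; done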

How can I just get the thing to mount so I can copy the data off it? That's all I want to do.

Pearlseattle 05-13-2008 01:00 PM

When you try to reassemble the RAID with "mdadm -A /dev/md0 /dev/sdd ... /dev/sdi", what does "cat /proc/mdstat" say? Are all HDDs up, or are some of them marked as failed?

Pearlseattle 05-13-2008 03:20 PM

For example, this is the output I get:
Code:

MYSRV ~ # cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md1 : active raid5 sdd1[2] sdc1[0] sdb1[1] sda1[3]
      732587712 blocks level 5, 32k chunk, algorithm 2 [4/4] [UUUU]
     
unused devices: <none>
MYSRV ~ #

As you can see, all 4 HDDs of my RAID5 are marked as being up ("U").
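
Just as an illustration (made-up output), if one of the disks had dropped out you would see the failed member flagged with "(F)" and a hole in the [U...] map, something like:
Code:

md1 : active raid5 sdd1[2](F) sdc1[0] sdb1[1] sda1[3]
      732587712 blocks level 5, 32k chunk, algorithm 2 [4/3] [UU_U]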

ron7000 05-14-2008 02:48 PM

Yes, I can reassemble the array, and mdstat shows everything is OK.

I've also run smartctl on the drives and they are OK.
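
For each drive I ran something along the lines of:
Code:

# overall SMART health verdict for one member disk
smartctl -H /dev/sdd
# full attributes and error log
smartctl -a /dev/sdd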

I've found the problem: an EFI partition table has been written over the primary superblock of the RAID. I think this happened when I installed SLES 10 on a new system disk with the RAID still attached... but that's only a guess.
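
The way I have been checking this (and I may be misreading the layout): a GPT header lives in the second 512-byte sector of whatever device it was written to and starts with the string "EFI PART", so I have been dumping that sector on the member disks; and xfs_db (if it is installed) can try to read one of the XFS backup superblocks read-only once the array is assembled:
Code:

# look for the GPT header signature on a member disk
dd if=/dev/sdd bs=512 skip=1 count=1 2>/dev/null | hexdump -C | head -n 4

# with the array assembled, try reading backup superblock 1 read-only
xfs_db -r -c "sb 1" -c "print" /dev/md0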

I ran xfs_repair once to try to restore the primary superblock, but it wants me to mount and unmount the filesystem to replay the log, which I can't do.

So now I am stuck with either risking xfs_repair -L, which destroys the log, and taking my chances with file corruption,
or finding a way to send the drives out to a filesystem or RAID recovery business; I've been told by a few people that they can restore the superblock manually.
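
Before going the -L route I at least want to do a read-only pass first, to see how bad the damage looks; something like:
Code:

# no-modify mode: report what xfs_repair would change without writing anything
xfs_repair -n /dev/md0

# last resort, only after the clones are done: zero the log and repair
# xfs_repair -L /dev/md0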

At this point I am trying to clone each hard drive so that I have a backup before I try xfs_repair -L.
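
For the cloning I was planning on a plain sector-for-sector dd of each member onto a spare disk of at least the same size (the /dev/sdX target below is just a placeholder):
Code:

# copy one member; keep going past read errors, padding unreadable blocks with zeros
dd if=/dev/sdd of=/dev/sdX bs=1M conv=noerror,sync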

