mdadm: mount fails with "specify filesystem type" / bad superblock error
Hi, I desperately need some help with mdadm.
It is version 1.5, under Red Hat Enterprise Linux 3.
I had the /data directory mounted, i.e. mount /dev/md0 /data,
and the machine had been running fine until the system froze a couple of nights ago.
No users could log in, and those who were logged in could not type anything, get a prompt, etc.
So we rebooted the machine, and it hung during the boot process on "locating mouse services".
With no solution to this, we got a new hard drive, loaded SUSE Linux Enterprise 10 SP1 on it, and got the machine running again.
At this point I can see all my RAID 5 disks in the partitioner, so I am happy.
I try to reassemble the array with mdadm -A /dev/md0 /dev/sdd ... /dev/sdi,
and I get the error: "you must specify the filesystem type, bad option, bad superblock on /dev/md0, missing codepage or other error".
I have tried to reassemble the array through the partitioner and get the same error.
I have tried specifying the filesystem with mount -t, at which point it always tells me "wrong fs type, bad option, bad superblock".
I am fairly certain the filesystem type is XFS.
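For reference, the assembly and mount steps described above, sketched as shell commands. The device names (/dev/sdd through /dev/sdi) and the /data mount point are taken from the post; these need root and the actual hardware. The read-only and norecovery options are my suggestion for a recovery scenario, not something from the thread:

```shell
# Assemble the RAID 5 array from its member disks
# (/dev/sdd through /dev/sdi, as named in the post).
mdadm --assemble /dev/md0 /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi

# Confirm the array is up and all members are active
# (e.g. [6/6] [UUUUUU], no (F) flags).
cat /proc/mdstat

# Mount with an explicit filesystem type, read-only to avoid further writes.
# XFS normally replays its log even on a read-only mount; the "norecovery"
# option skips the replay, which can let a dirty filesystem mount just long
# enough to copy data off.
mount -t xfs -o ro,norecovery /dev/md0 /data
```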
We connected the RAID disks to another machine running Red Hat 3, hoping SLES 10 might have had something to do with it. But running on the exact same machine as the original, and on the exact same operating system, I still get the same error.
Up until late Friday I was always able to kick off mdadm -A; now even that is giving me errors.
And I was never able to mount /dev/md0.
Any ideas?
I've tried mdadm --detail and --examine and things look normal: I have 6 disks with 1 spare, RAID 5, persistent superblock, the status says clean, and the checksums had been off but are now all correct.
How can I just get the thing to mount so I can copy the data off it? That's all I want to do.
When you try to reassemble the RAID with "mdadm -A /dev/md0 /dev/sdd ... /dev/sdi", what does "cat /proc/mdstat" say? Are all HDDs up, or are some of them marked as failed?
Yes, I can reassemble the array, and mdstat shows everything OK.
I've also run smartctl on the drives and they are OK.
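The checks mentioned in this exchange (array detail, per-member superblock examination, SMART health) can be scripted roughly like this, again assuming the member devices are /dev/sdd through /dev/sdi as in the thread:

```shell
# Array-level view: RAID level, state, and the status of each member.
mdadm --detail /dev/md0

# Per-member superblocks: the event counters and checksums reported
# here should agree across all members of the array.
for d in /dev/sd[d-i]; do
    mdadm --examine "$d"
done

# Drive health, as checked in the thread.
for d in /dev/sd[d-i]; do
    smartctl -H "$d"
done
```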
I've found the problem: an EFI partition table was written over the primary superblock of the RAID. I think this happened when I installed SLES 10 on a new system disk with the RAID still attached, but that's only a guess.
I ran xfs_repair once to try to restore the primary superblock, but it wants me to mount and unmount the filesystem to replay the log, which I can't do.
So now I am stuck with either risking xfs_repair -L, which destroys the log and takes my chances with file corruption,
or finding a way to send the drives out to a filesystem-recovery or RAID-recovery business; a few people have told me such businesses can restore the superblock manually.
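Before deciding between xfs_repair -L and a recovery service, two non-destructive checks are worth running. A sketch; the byte offset for the GPT signature assumes 512-byte sectors:

```shell
# Check whether a GPT really was written over the start of the array:
# the "EFI PART" signature sits at the start of sector 1, i.e. byte
# offset 512 with 512-byte sectors.
hexdump -C -s 512 -n 16 /dev/md0

# Dry run: xfs_repair -n only reports what it would fix, without
# writing anything to the device.
xfs_repair -n /dev/md0
```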
At this point I am trying to clone each hard drive, to have a backup before I try xfs_repair -L.
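The cloning step can be sketched like this; the /backup target path is an assumption and must live on a separate disk with enough space for six full images:

```shell
# Image each member disk before any destructive repair.
# conv=noerror,sync continues past read errors and pads unreadable
# blocks with zeros so offsets stay aligned; a smaller bs limits how
# much data a single read error zeroes out.
for d in d e f g h i; do
    dd if=/dev/sd$d of=/backup/sd$d.img bs=1M conv=noerror,sync
done
```

For drives that are actually failing, GNU ddrescue is better suited than dd: it retries bad regions and keeps a map of what has been recovered.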