Raid1 array says /dev/md0 does not have a valid partition table, won't auto mount
I recently set up a RAID1 software array using mdadm on Ubuntu 9.04. The system is not configured to boot from the RAID array, but rather from a single hard disk with the OS installed on it. The array is two separate disks mirrored with RAID1 for storage. I can mount and unmount the array just fine, but when I try to have it auto-mounted at boot I get an error saying that either the superblock size is not correct on /dev/md0, or there is a problem with the partition table on /dev/md0. From everything I've read, it seems it gives this error because the device for the RAID array, /dev/md0, doesn't contain a valid partition table.
Upon further research, it appears that a raid array device (/dev/md0 in my case) is not supposed to contain a partition table, since it is a raid device. Several posts I've read say this error is safe to ignore. Still, every time I reboot, bootup is interrupted and I am dropped to a black terminal screen saying there might be a problem with the /dev/md0 partition table. If I just type exit, it continues a normal boot and I am able to mount/unmount the array just fine. My main question is: how do I configure things so /dev/md0 auto-mounts at boot without throwing the error described above? Here is some more information about my configuration:
My guess is that you're not auto-assembling the RAID before auto-mounting it. Your fdisk output looks fine. I presume you put the array in fstab to get it automatically mounted. You should update mdadm.conf, I think, to make sure the array gets assembled first.
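A quick way to check whether the array is actually assembled at that point in the boot (the device name /dev/md0 is just the one from this thread):

```
cat /proc/mdstat                # kernel's view of all md arrays and their sync state
sudo mdadm --detail /dev/md0    # array level, member disks, assembly status
sudo blkid /dev/md0             # filesystem type and UUID, handy for fstab
```

If /proc/mdstat doesn't list the array before the mount is attempted, assembly is the problem.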
Correct, I put the array in fstab to get it automatically mounted. My /etc/mdadm/mdadm.conf contains the following line, which takes care of the auto-assembling:
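(The exact line isn't reproduced here, but a typical auto-assembly entry has this shape; the UUID below is made up for illustration, and the real one comes from "mdadm --examine --scan":)

```
# /etc/mdadm/mdadm.conf -- example ARRAY entry (UUID is illustrative)
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=3aaa0122:29827cfa:5331ad66:ca767371
```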
It appears that it is failing when it tries to run fsck on /dev/md0. I'm attaching a screenshot of what happens when I boot. If I hit Ctrl-D or type exit at the root prompt, it continues with a normal bootup, and the next thing it does is mount the filesystems that are set to auto-mount in fstab. It actually does mount the filesystem successfully, as you can see from running df:
Maybe there's a way to keep fsck from running on /dev/md0, since it isn't an actual hard disk? Does fstab force an fsck on all volumes that are set to auto-mount?
Alright, I read up some more on fstab. The last field controls whether fsck runs on the filesystem at boot. I changed that field from 1 to 0 and am back in business. Here's a good page on understanding fstab, from which I got the quote below: http://www.tuxfiles.org/linuxhelp/fstab.html
"The 6th column is a fsck option. fsck looks at the number in the 6th column to determine in which order the filesystems should be checked. If it's zero, fsck won't check the filesystem."
My raid1 array is clean and now automounts upon boot without any errors.
Entry from fstab now:
UUID=fb6e018b-6196-4408-97fc-53d3bd4cef59 /mnt/water ext4 relatime,errors=remount-ro 0 0
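For reference, the six fields in that entry follow the standard fstab layout:

```
# <device>                                  <mount point> <type> <options>                   <dump> <fsck pass>
UUID=fb6e018b-6196-4408-97fc-53d3bd4cef59   /mnt/water    ext4   relatime,errors=remount-ro  0      0
```

The final 0 is the fsck pass number that keeps the boot-time check from running.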
I am late to the party, but I am afraid you've hidden the problem rather than fixed it. It looks like your partition table and your superblock don't agree on the filesystem size; that happens when something goes wrong during array creation and the md superblocks overlap with the filesystem.
You've hidden the error, but you will hit the wall when the system eventually tries to write data where the superblocks reside...
Just to make sure this isn't your case, boot from a live CD (any with mdadm; I use SystemRescueCd, Parted Magic...), and assemble your array if it isn't done automatically.
Try fsck'ing the md device; it will probably fail.
Then do:
e2fsck -cc /dev/mdX    <-- substitute your actual md device name; this will take a long time
resize2fs /dev/mdX
fsck -f /dev/mdX    <-- this time it will probably work; problem solved
If you don't do that, you will likely run into trouble and lose data in the future...
e2fsck is running now. I still haven't put my critical data on the array, and won't until I feel 100% comfortable that everything is good to go. Worst case, I can just wipe the disks and rebuild the array from scratch. I didn't have to boot from a live CD since the RAID1 array is composed of two 1.5 TB drives that are separate from the boot drive. Ten minutes in and only 0.47% complete, ouch; at this rate it will take 35 hours to finish. You weren't kidding when you said it would take a while. Is it normal for it to take that long?
Yes, I'm afraid it's perfectly normal. I had to do it on a 500GB array recently and it took 10 hours. If it's not a problem, you'd be better off starting over and getting things right from the beginning; if you are patient and don't need the machine for the next three days, just let it run.
Large arrays are a pain in the neck anyway: it takes ages to fsck a filesystem on reboot (a lot faster with ext4), and if one of your drives fails, your data is at risk for as long as the replacement or spare is syncing... Yesterday's technologies like raid and ext3 are out of sync with today's hardware...
The e2fsck finally finished sometime last night. I ran the other commands as you suggested and now the fsck passes. I flipped the last field back to 1 in /etc/fstab so that fsck runs at boot, and it was actually quite fast (it is ext4). Thanks a bunch, thveillon!
Here is the output from the commands I ran:
ben@Sokka:~$ sudo e2fsck -cc /dev/md0
e2fsck 1.41.4 (27-Jan-2009)
The filesystem size (according to the superblock) is 366284000 blocks
The physical size of the device is 366283984 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? no
Checking for bad blocks (non-destructive read-write test)
Testing with random pattern: badblocks: Invalid argument during seek
badblocks: Invalid argument during seek
badblocks: Invalid argument during seek
badblocks: Invalid argument during seek
badblocks: Invalid argument during seek
badblocks: Invalid argument during seek
badblocks: Invalid argument during seek
badblocks: Invalid argument during seek
badblocks: Invalid argument during seek
badblocks: Invalid argument during seek
badblocks: Invalid argument during seek
badblocks: Invalid argument during seek
badblocks: Invalid argument during seek
badblocks: Invalid argument during seek
badblocks: Invalid argument during seek
done
/dev/md0: Updating bad block inode.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/md0: ***** FILE SYSTEM WAS MODIFIED *****
/dev/md0: 16/91578368 files (0.0% non-contiguous), 5854335/366284000 blocks
ben@Sokka:~$ sudo resize2fs /dev/md0
[sudo] password for ben:
resize2fs 1.41.4 (27-Jan-2009)
Resizing the filesystem on /dev/md0 to 366283984 (4k) blocks.
The filesystem on /dev/md0 is now 366283984 blocks long.
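As a sanity check on those numbers: the superblock claimed 16 more 4k blocks than the device actually has, i.e. 64 KiB, which is consistent with the space mdadm's legacy 0.90 metadata reserves near the end of each member disk:

```shell
FS_BLOCKS=366284000    # filesystem size according to the ext4 superblock
DEV_BLOCKS=366283984   # physical size of /dev/md0
BLOCK_SIZE=4096        # 4k blocks, per the resize2fs output
OVERLAP=$(( (FS_BLOCKS - DEV_BLOCKS) * BLOCK_SIZE ))
echo "overlap: ${OVERLAP} bytes"    # 65536 bytes = 64 KiB
```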
All is well now, the array should be good to go.
If you want to avoid this problem when creating a raid array from partitions that already have a filesystem on them (especially when creating a "degraded" array with a missing drive), shrink the filesystems by a megabyte or so before creating the raid, then use resize2fs on the md device afterward to grow them back.
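A sketch of that workflow, assuming an existing ext3/ext4 filesystem on /dev/sdb1 (all device names and sizes here are hypothetical; double-check yours before running anything destructive):

```
# 1. Shrink the existing filesystem slightly so the md superblock can't overlap it
sudo resize2fs /dev/sdb1 1430G        # a bit smaller than the partition

# 2. Create the degraded RAID1 with the second member missing for now
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing

# 3. Grow the filesystem back so it exactly fills the md device
sudo resize2fs /dev/md0
```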
A better way is to create partitions without a filesystem, then format the md device with the desired filesystem after raid creation, e.g. "mkfs.ext4 /dev/md0". You can also consider partitionable raid arrays, where you use whole bare drives to create the array and then partition and format it (creating md0p1, etc.), but I have encountered more problems (mainly boot-related, or when switching from one distribution to another) with that kind of array.
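A minimal sketch of that cleaner order of operations (device names hypothetical; note that mdadm --create destroys whatever is on the partitions):

```
# Create the array first, from partitions that carry no filesystem yet
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Then put the filesystem directly on the md device
sudo mkfs.ext4 /dev/md0
```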
I seem to have the same partition table error on my RAID5 array.
Will running the commands mentioned above damage the data on my raid? I've already got data in the array.
The "e2fsck -cc" shouldn't wreck your data; it's a non-destructive read-write test. It is disk-intensive, though, so it can push an old, failure-prone disk over the edge.
The resize2fs has the potential to be nasty: it shouldn't cause any damage, but it could.
As always, backup important data, read the man pages before issuing some random commands found on the Internet ;-) . Since you don't provide much information about the error you are encountering, there's even a remote chance that this set of commands will be inadequate, if not counter-productive, for your particular situation.
Good luck.
thveillon - by the way, since I wasn't officially using my array yet and I had the time, I rebuilt the array from scratch using the method described above (create array first, then format it with mkfs.ext4). It came up without the errors I previously had encountered. Thanks for the info!