LinuxQuestions.org > Forums > Linux Forums > Linux - Distributions > Ubuntu
Ubuntu: This forum is for the discussion of Ubuntu Linux.
Old 09-26-2009, 02:51 AM   #1
bhepdogg
LQ Newbie
 
Registered: Sep 2006
Distribution: Ubuntu 9.04
Posts: 13

Rep: Reputation: 0
RAID1 array says /dev/md0 does not have a valid partition table, won't auto-mount


I recently set up a RAID1 software array using mdadm on Ubuntu 9.04. The system is not configured to boot from the RAID array, but rather from a single hard disk with the OS installed on it. The array is two separate disks mirrored with RAID1 for storage. I can mount and unmount the array just fine, but when I try to have it auto-mounted at boot I get an error saying that either the superblock size on /dev/md0 is not correct, or there is a problem with the partition table on /dev/md0. From everything I've read, it seems it gives this error because the RAID device, /dev/md0, doesn't contain a valid partition table.

Upon further research, it appears that a RAID array (/dev/md0 in my case) is not supposed to contain a partition table, since it is a RAID device, and several posts I've read say this error is safe to ignore. Still, every time I reboot, bootup is interrupted and I am dropped to a black terminal screen saying there might be a problem with the /dev/md0 partition table. If I just type exit, boot continues normally and I am able to mount/unmount the array just fine. My main question is: how do I configure /dev/md0 to auto-mount at boot without throwing the error described above? Here is some more information about my configuration:

Output from /proc/mdstat:
root@Sokka:/etc/init.d# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdc1[1] sdb1[0]
1465135936 blocks [2/2] [UU]

unused devices: <none>


/etc/fstab entry:
UUID=fb6e018b-6196-4408-97fc-53d3bd4cef59 /mnt/water ext4 relatime,errors=remount-ro 0 1


Output from fdisk -l
root@Sokka:/etc/init.d# fdisk -l

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xc168c168

Device Boot Start End Blocks Id System
/dev/sda1 * 1 24316 195318238+ 7 HPFS/NTFS
/dev/sda2 24317 60801 293065762+ 5 Extended
/dev/sda5 24317 24565 2000061 82 Linux swap / Solaris
/dev/sda6 24566 60801 291065638+ 83 Linux

Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0007551f

Device Boot Start End Blocks Id System
/dev/sdb1 1 182401 1465136001 fd Linux raid autodetect

Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0004f9a4

Device Boot Start End Blocks Id System
/dev/sdc1 1 182401 1465136001 fd Linux raid autodetect

Disk /dev/md0: 1500.2 GB, 1500299198464 bytes
2 heads, 4 sectors/track, 366283984 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000
Disk /dev/md0 doesn't contain a valid partition table

Last edited by bhepdogg; 09-28-2009 at 04:02 PM.
 
Old 09-29-2009, 06:53 PM   #2
mostlyharmless
Senior Member
 
Registered: Jan 2008
Distribution: Arch/Manjaro, might try Slackware again
Posts: 1,851
Blog Entries: 14

Rep: Reputation: 284
My guess is that you're not auto-assembling the RAID before auto-mounting it. Your fdisk output looks fine. I presume you put the array in fstab to get it automatically mounted. You should also update mdadm.conf, I think, to make sure the array gets assembled first.
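A common way to (re)generate that mdadm.conf entry is to let mdadm scan the currently assembled arrays and append what it finds. This is only a sketch, assuming the Debian/Ubuntu config path and tooling:

```shell
# Sketch, assuming a Debian/Ubuntu layout (/etc/mdadm/mdadm.conf).
# Have mdadm describe the assembled arrays and record them so they
# are auto-assembled at boot.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
# Ubuntu keeps a copy of mdadm.conf inside the initramfs, so rebuild it:
sudo update-initramfs -u
```

Afterwards, check that /etc/mdadm/mdadm.conf ends with a single ARRAY line per md device (remove any duplicates the append may have created).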

Hope that helps
 
Old 09-30-2009, 01:22 AM   #3
bhepdogg
LQ Newbie
 
Registered: Sep 2006
Distribution: Ubuntu 9.04
Posts: 13

Original Poster
Rep: Reputation: 0
Correct, I put the array in /etc/fstab to get it automatically mounted. My /etc/mdadm/mdadm.conf contains the following line, which takes care of the auto-assembling:

ARRAY /dev/md0 level=raid1 num-devices=2 UUID=2576e705:c3da6d6c:7bd08a6c:712e832d

It appears that it is failing when it tries to run fsck on /dev/md0. I'm attaching a screenshot of what happens when I boot. If I hit CTRL-D or type exit at the root prompt, it continues with the normal bootup, and the next thing it does is mount the filesystems that are set to auto-mount in fstab. It actually does mount the filesystem successfully, as you can see from running df:

ben@Sokka:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda6 274G 15G 246G 6% /
...
/dev/md0 1.4T 198M 1.3T 1% /mnt/water

Is there a way to keep fsck from running on /dev/md0, since it isn't an actual hard disk? Does fstab force an fsck on all volumes that are set to auto-mount?
Attached Thumbnail: boot-error.jpg (screenshot of the boot error)
 
Old 09-30-2009, 02:43 AM   #4
bhepdogg
LQ Newbie
 
Registered: Sep 2006
Distribution: Ubuntu 9.04
Posts: 13

Original Poster
Rep: Reputation: 0
Alright, I read up some more on fstab. The last field controls whether fsck runs on the filesystem at boot. I changed that field from 1 to 0 and am back in business. Here's a good page on understanding fstab, from which I got the quote below:
http://www.tuxfiles.org/linuxhelp/fstab.html

"The 6th column is a fsck option. fsck looks at the number in the 6th column to determine in which order the filesystems should be checked. If it's zero, fsck won't check the filesystem."

My raid1 array is clean and now automounts upon boot without any errors.

Entry from fstab now:
UUID=fb6e018b-6196-4408-97fc-53d3bd4cef59 /mnt/water ext4 relatime,errors=remount-ro 0 0
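The fstab fields are plain whitespace-separated columns, so the pass number is easy to inspect. A small illustration (the fstab line is the one from this thread; the awk one-liner is just for demonstration):

```shell
# Print the mount point (field 2) and the fsck pass number (field 6)
# of an fstab entry; a pass of 0 means fsck skips the filesystem at boot.
line="UUID=fb6e018b-6196-4408-97fc-53d3bd4cef59 /mnt/water ext4 relatime,errors=remount-ro 0 0"
echo "$line" | awk '{print "mountpoint:", $2, "| fsck pass:", $6}'
# prints: mountpoint: /mnt/water | fsck pass: 0
```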
 
Old 10-03-2009, 02:57 PM   #5
thveillon
Member
 
Registered: Dec 2007
Posts: 59

Rep: Reputation: 16
Hi,

I am late to your party, but I am afraid you've hidden the problem rather than fixed it. It looks like your partition table and your superblock don't agree on the filesystem size; this happens when something goes wrong during array creation and the RAID superblocks overlap the end of the filesystem.

You've hidden the error message, but you will hit the wall when the system tries to write data where the RAID superblocks reside...

Just to make sure this isn't your case, boot a live CD (any with mdadm; I use SystemRescueCd, Parted Magic...), and assemble your array if it isn't assembled automatically.
Try fsck'ing the md device; it will probably fail.

Then do:

e2fsck -cc /dev/md*    # put whatever your md device name is; this will take a long time
resize2fs /dev/md*
fsck -f /dev/md*       # this time it should work; problem solved


If you don't do that, you will likely be in trouble and lose data in the future...

My 2cts

Tom
 
Old 10-03-2009, 03:21 PM   #6
bhepdogg
LQ Newbie
 
Registered: Sep 2006
Distribution: Ubuntu 9.04
Posts: 13

Original Poster
Rep: Reputation: 0
e2fsck is running now. I still haven't put my critical data on the array; I'm waiting until I feel 100% comfortable that everything is good to go. Worst case scenario, I can just wipe the disks and rebuild the array from scratch. I didn't have to boot from a live CD, since the RAID1 array is comprised of two 1.5 TB drives that are separate from the boot drive. 10 minutes in and only 0.47% complete, ouch. At this rate it will take about 35 hours to complete. You weren't kidding when you said it would take a while. Is it normal for it to take that long?
 
Old 10-04-2009, 02:57 AM   #7
thveillon
Member
 
Registered: Dec 2007
Posts: 59

Rep: Reputation: 16
Yes, I am afraid it's perfectly normal. I had to do it on a 500 GB array lately and it took 10 hours. If it's not a problem, you'd be better off starting over and getting things right from the beginning. If you are patient and don't need the machine for the next three days, just let it run.
Large arrays are a pain in the neck anyway: it takes ages to fsck a filesystem on reboot (a lot faster with ext4), and if one of your drives fails, your data is jeopardized for as long as the replacement or spare is syncing... Yesterday's technologies like RAID and ext3 are out of sync with today's hardware...
 
Old 10-05-2009, 10:50 AM   #8
bhepdogg
LQ Newbie
 
Registered: Sep 2006
Distribution: Ubuntu 9.04
Posts: 13

Original Poster
Rep: Reputation: 0
The e2fsck finally finished running sometime during the night. I ran the other commands as you suggested, and now the fsck passes. I flipped the last field back to 1 in /etc/fstab so that fsck runs at boot, and it was actually quite fast (it is ext4). Thanks a bunch, thveillon!

Here is the output from the commands I ran:

ben@Sokka:~$ sudo e2fsck -cc /dev/md0
e2fsck 1.41.4 (27-Jan-2009)
The filesystem size (according to the superblock) is 366284000 blocks
The physical size of the device is 366283984 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? no


Checking for bad blocks (non-destructive read-write test)
Testing with random pattern: badblocks: Invalid argument during seek
(message repeated 14 more times)
done
/dev/md0: Updating bad block inode.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

/dev/md0: ***** FILE SYSTEM WAS MODIFIED *****
/dev/md0: 16/91578368 files (0.0% non-contiguous), 5854335/366284000 blocks



ben@Sokka:~$ sudo resize2fs /dev/md0
[sudo] password for ben:
resize2fs 1.41.4 (27-Jan-2009)
Resizing the filesystem on /dev/md0 to 366283984 (4k) blocks.
The filesystem on /dev/md0 is now 366283984 blocks long.




ben@Sokka:~$ sudo fsck -f /dev/md0
fsck 1.41.4 (27-Jan-2009)
e2fsck 1.41.4 (27-Jan-2009)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Block bitmap differences: -9257
Fix<y>? yes

Free blocks count wrong for group #0 (981, counted=982).
Fix<y>? yes

Free blocks count wrong (360429665, counted=360429666).
Fix<y>? yes


/dev/md0: ***** FILE SYSTEM WAS MODIFIED *****
/dev/md0: 16/91578368 files (0.0% non-contiguous), 5854318/366283984 blocks



ben@Sokka:~$ sudo fsck -f /dev/md0
fsck 1.41.4 (27-Jan-2009)
e2fsck 1.41.4 (27-Jan-2009)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/md0: 16/91578368 files (0.0% non-contiguous), 5854318/366283984 blocks
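For the record, the size mismatch e2fsck complained about is tiny: the superblock claimed 366284000 blocks while /dev/md0 actually holds 366283984, i.e. 16 fewer 4 KiB blocks (64 KiB), which is consistent with space reserved by mdadm's metadata at the end of each member that the pre-existing filesystem didn't know about. A quick check of the arithmetic:

```shell
# Sanity-check the mismatch reported by e2fsck above.
fs_blocks=366284000    # filesystem size according to the superblock
dev_blocks=366283984   # physical size of /dev/md0, in 4 KiB blocks
overhang=$((fs_blocks - dev_blocks))
echo "overhang: ${overhang} blocks = $((overhang * 4)) KiB"
# prints: overhang: 16 blocks = 64 KiB
```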
 
Old 10-06-2009, 01:46 AM   #9
thveillon
Member
 
Registered: Dec 2007
Posts: 59

Rep: Reputation: 16
All is well now, the array should be good to go.
If you want to avoid this problem when creating a RAID array from partitions that already have filesystems on them (especially when creating a "degraded" array with a missing drive), shrink the filesystem by roughly 1 MB before creating the array, then grow it back with resize2fs on the md device afterward.
A better way is to create the partitions without a filesystem and format the md device after the array is created, e.g. "mkfs.ext4 /dev/md0". You can also consider partitionable RAID arrays, where you build the array from whole bare drives and then partition and format it (creating md0p1 etc...), but I have run into more problems (mainly boot-related, or when switching from one distribution to another) with that kind of array.
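To sketch that second approach with the member partitions from this thread (an illustration only: mdadm --create is destructive, so double-check the device names against your own setup):

```shell
# Build the mirror first, then put the filesystem on the md device
# rather than on the member partitions. Destructive: example devices!
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mkfs.ext4 /dev/md0    # filesystem lives on /dev/md0, not on sdb1/sdc1
sudo blkid /dev/md0        # shows the filesystem UUID for the fstab entry
```

With this order, the filesystem is sized to the md device from the start, so the superblock and the array size can never disagree.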

Good luck.
 
Old 10-27-2009, 05:05 AM   #10
remeron
LQ Newbie
 
Registered: Oct 2009
Posts: 1

Rep: Reputation: 0
I seem to have the same partition table error on my RAID 5.
Will running the commands mentioned above put the data on my array at risk? I've already got data in the array.
 
Old 10-29-2009, 11:34 AM   #11
bhepdogg
LQ Newbie
 
Registered: Sep 2006
Distribution: Ubuntu 9.04
Posts: 13

Original Poster
Rep: Reputation: 0
I didn't have any mission critical data on mine when I ran the commands, but I did have a few test files. They survived through the ordeal just fine.
 
Old 10-30-2009, 03:26 AM   #12
thveillon
Member
 
Registered: Dec 2007
Posts: 59

Rep: Reputation: 16
Hi remeron,

the "e2fsck -cc" shouldn't wreck your data; it's a non-destructive read-write test. It is disk-intensive, though, so it can push an old, failure-prone disk over the edge.
resize2fs has the potential to be nasty; it shouldn't cause harm, but it could.

As always: back up important data and read the man pages before issuing random commands found on the Internet ;-) . Since you don't provide much information about the error you are encountering, there's even a remote chance that this set of commands will be inadequate, if not counter-productive, for your particular situation.
 
Old 10-30-2009, 03:50 PM   #13
bhepdogg
LQ Newbie
 
Registered: Sep 2006
Distribution: Ubuntu 9.04
Posts: 13

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by thveillon View Post
A better way is to create partitions without a filesystem, then format it to the desired filesystem after raid creation, like "mkfs.ext4 /dev/md0". You can also consider partitionable raid arrays, where you use whole bare drives to create the arrays and then partition and format it (creating md0p1 etc...), but I have encountered more problems (boot-related mainly, or switching from one distribution to another) with this kind of arrays.

Good luck.

thveillon - by the way, since I wasn't officially using my array yet and had the time, I rebuilt it from scratch using the method described above (create the array first, then format it with mkfs.ext4). It came up without the errors I had previously encountered. Thanks for the info!
 
  


Tags: linux, raid, raid1, ubuntu