LinuxQuestions.org
Old 08-05-2009, 08:53 PM   #1
wjtaylor
Member
 
Registered: Feb 2009
Posts: 78

Rep: Reputation: 15
raid 5 invalid partition table


Hi,

I have a raid 5 with an invalid partition table.

It is made up of 3 drives that also have invalid partition tables.

The 3 drives were used entirely for the RAID; it did not contain the OS, just data.

The filesystem is JFS. I can read/write to the RAID fine.

Is there a recommended procedure for recovery?
Can I just use fdisk to create a partition covering all the blocks on each drive?

fdisk -l output below (sda is my OS drive):

Code:
Disk /dev/sda: 41.1 GB, 41110142976 bytes
255 heads, 63 sectors/track, 4998 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x0f800000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               2         131     1044225   83  Linux
/dev/sda2   *         132        1436    10482412+  83  Linux
/dev/sda3            1437        4438    24113565   83  Linux
/dev/sda4            4439        4996     4482135   82  Linux swap / Solaris

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x003d5ace

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x003d5ace

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/md0: 2000.4 GB, 2000409591808 bytes
2 heads, 4 sectors/track, 488381248 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table
Tesla:~ #
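For reference, a minimal sketch of commands to inspect the array and its members before changing anything (assuming mdadm and the device names shown above):

Code:
# state, level and member devices of the assembled array
mdadm --detail /dev/md0
# md superblock on each whole-disk member
mdadm --examine /dev/sdb /dev/sdc /dev/sdd
# the kernel's view of all active md arrays
cat /proc/mdstat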
Thanks!
WT
 
Old 08-06-2009, 07:08 PM   #2
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Rocky 9.2
Posts: 18,359

Rep: Reputation: 2751
Personally I'd back up all the data, then re-fdisk the RAID drives, one partition per drive.
Make sure you set the correct type 'fd' on each partition and 'w'rite the changes to the disks.
You may(!) need to reboot to ensure the changes take.
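A minimal sketch of that for one drive (using /dev/sdb from the listing above; repeat for each RAID member, and only after the data really is backed up, since repartitioning destroys the array contents):

Code:
fdisk /dev/sdb
# then inside fdisk:
#   n  - new primary partition spanning the whole disk (accept the defaults)
#   t  - change its type; enter fd (Linux raid autodetect)
#   w  - write the new partition table and exit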
 
Old 08-22-2009, 03:12 PM   #3
wjtaylor
Member
 
Registered: Feb 2009
Posts: 78

Original Poster
Rep: Reputation: 15
Quote:
Originally Posted by chrism01 View Post
Personally I'd back up all the data, then re-fdisk the RAID drives, one partition per drive.
Make sure you set the correct type 'fd' on each partition and 'w'rite the changes to the disks.
You may(!) need to reboot to ensure the changes take.
Thanks for your suggestion. Unfortunately, it didn't work. It looks good until I reboot, then it re-corrupts.

Any other thoughts on this?

md0 mounts and the data I've used is intact...

Thanks,
WT
 
Old 08-24-2009, 12:10 AM   #4
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Rocky 9.2
Posts: 18,359

Rep: Reputation: 2751
Need more details; what exactly(!) do you mean by re-corrupt?
Did you reload the data after the fdisk + reboot? The old on-disk copy is already corrupt, by the sounds of it. You probably need to go back to a previous 'known good' backup, i.e. one taken before your troubles started.
 
Old 09-02-2009, 01:27 PM   #5
MrNice
LQ Newbie
 
Registered: Aug 2009
Location: Ireland
Posts: 8

Rep: Reputation: 2
Hi there,

I have exactly the same problem.
I have a RAID 5 using 4 whole drives.
Everything seems to be OK; I can mount, but I can't write because of the invalid partition table.
This is a new setup and there is no data on the drives ;-)

chrism01, what do you mean by "Make sure you set the correct type 'fd' on each partition"?
wjtaylor, did you fix your issue? How?

Thanks
 
Old 09-02-2009, 11:31 PM   #6
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Rocky 9.2
Posts: 18,359

Rep: Reputation: 2751
Every partition type has a 2-character hex code to identify it: http://www.win.tue.nl/~aeb/partition...n_types-1.html

82 Linux swap
83 Linux native partition
8e Linux Logical Volume Manager partition
fd Linux RAID
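
To double-check, the Id column of fdisk's listing should show fd on each RAID member afterwards; a minimal sketch using one of the drives above:

Code:
# the Id column should read 'fd' (Linux raid autodetect) for the RAID partition
fdisk -l /dev/sdb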
 
Old 09-03-2009, 02:20 AM   #7
MrNice
LQ Newbie
 
Registered: Aug 2009
Location: Ireland
Posts: 8

Rep: Reputation: 2
Thanks chrism01, I did not know that about RAID.

So, to summarize (see the sketch below):
1- create one partition on each hard drive, with type fd
2- create/assemble the RAID as one new device
3- format this new device
4- mount this new device

If I am right, I have to do it again from scratch. Good for learning!
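
A minimal sketch of those four steps for a 4-drive RAID 5 with JFS (the device names sdb1-sde1 and the mount point /data are assumptions; note that mdadm --create wipes whatever is already on the members):

Code:
# 1- partition each drive with fdisk (n, then t -> fd, then w), then:
# 2- build the array from the four partitions
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# 3- format the new md device
mkfs.jfs /dev/md0
# 4- mount it
mkdir -p /data
mount /dev/md0 /data
# the initial sync runs in the background; watch it here
cat /proc/mdstat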

Take care
 
Old 09-03-2009, 05:51 AM   #8
garydale
Member
 
Registered: Feb 2007
Posts: 142

Rep: Reputation: 23
Quote:
Originally Posted by MrNice View Post
Thanks chrism01, I did not know that about RAID.

So, to summarize:
1- create one partition on each hard drive, with type fd
2- create/assemble the RAID as one new device
3- format this new device
4- mount this new device

If I am right, I have to do it again from scratch. Good for learning!

Take care
You can change the partition type without creating a whole new partition table. Providing the drive is not in use, a reboot is usually not necessary. Booting from a live CD is recommended.

You may have to remove and re-add each partition in turn to the array, letting it re-sync before doing the next partition. DO NOT do two partitions in the same RAID device at the same time!
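
A minimal sketch of that cycle for one member (using /dev/sdb1 in /dev/md0 as an assumed example):

Code:
# fail and remove one member partition from the array
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
# change its partition type to fd here if needed, then add it back
mdadm /dev/md0 --add /dev/sdb1
# wait for the resync to finish before touching the next partition
watch cat /proc/mdstat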
 
Old 09-07-2009, 09:18 AM   #9
wjtaylor
Member
 
Registered: Feb 2009
Posts: 78

Original Poster
Rep: Reputation: 15
Well, here's where I'm at.

I've just returned from travelling and have done some work on the box.

Here's a little background.

I have 1 IDE drive w/ the OS on it. The 3 SATA drives make up the raid.

The motherboard was not detecting the HDDs properly, so I adjusted some settings (IDE/AHCI mode). The drives were then properly detected in the BIOS, and the box booted up fine.

However, mdadm emailed me to report a degraded array event. I show one drive down. I may have to recreate the RAID and reload the data; that's fine, I don't have much on it at the moment.

Has anyone encountered this before?

Here's the million-dollar question though. I am using mdadm for Linux software RAID, NOT AHCI for motherboard RAID. What do I need to be aware of for AHCI and mdadm to coexist and provide reliable RAID (any RAID level) storage?

This also brings up several questions about RAID maintenance.

I just put in a SCSI card and will back up to tape.
What maintenance/safety procedures should I perform on the RAID?
(fsck? parity check? etc.)

Thanks,
WT
 
Old 09-07-2009, 11:35 PM   #10
garydale
Member
 
Registered: Feb 2007
Posts: 142

Rep: Reputation: 23
You're asking two different questions.

1) re. the degraded array, you simply need to get the bad drive/partition working again and mdadm will re-sync it. If your drives aren't identical, you may have to do some interesting experiments to make sure the partition sizes allow the drive to fit into your RAID array.

If mdadm doesn't find enough space in the degraded partition, it won't add it. It needs space at the end of the partition to write the superblock. When resizing my RAID array recently, I had to do some experiments to get the RAID array, the partitions and file system (ext3) to all work together.

You can grow the file system to take up the full space automatically, but not the RAID array. That has to be resized to fit into the partition while allowing space for the superblock. It's trial and error, as mdadm will tell you if it can't make it fit but will happily ignore excess space at the end. You have to play around to make it the largest possible size for your partitions.
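
A minimal sketch of the two resize steps being described here (he was using ext3, hence resize2fs; whether --size=max lands exactly where you want is part of the trial and error):

Code:
# grow the array's per-device size; 'max' asks mdadm for the largest size that fits,
# while an explicit --size=<KiB> value is what needs the experimenting described above
mdadm --grow /dev/md0 --size=max
# then grow the ext3 filesystem to fill the enlarged array
resize2fs /dev/md0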


2) AHCI is NOT RAID. It's a protocol for accessing the SATA drives. The RAID on your motherboard is almost certainly not as good as software RAID and I'd recommend not using it.

Software RAID and hardware RAID are in no way compatible. The RAID on your motherboard combines whole disks into a RAID array which you can then partition. Software RAID combines partitions into a RAID array. The partitions need to be marked as type "fd".

You can use both in the same computer - even using the same drives. Although I don't see any reason to run software RAID over hardware RAID, there is nothing to stop you from doing it. More sanely, you may have three small drives in a hardware RAID 5 array to boot from, and some larger drives configured using RAID 5 or 6 for data. This gets around the issue of booting into a RAID 5 array since the kernel just sees the hardware RAID drive.

However, disk space being what it is, I'd just create the boot partition as a small software RAID 1 array with multiple mirrors (you're not limited to one) and use the rest of the disk space for RAID 5.

Unless you're a Windows user, I'd forget about the hardware RAID on your motherboard.

Last edited by garydale; 09-08-2009 at 05:17 AM.
 
Old 09-07-2009, 11:39 PM   #11
garydale
Member
 
Registered: Feb 2007
Posts: 142

Rep: Reputation: 23
Sorry, your third question: You don't need to do anything special beyond what you'd do for any disk drive. SMART monitoring is always a good idea.

Since you're already getting notices of degraded arrays, just make sure you don't ignore them!

On a preventative note: modern hard drives can generate a lot of heat, and heat shortens their life. Make sure you have enough air blowing over the drives to keep them cool (no hotter than 40 Celsius normally).
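
A minimal sketch of that kind of routine monitoring with smartmontools, plus mdadm's own monitor mode (the mail address is a placeholder); an occasional consistency check of the array doesn't hurt either:

Code:
# SMART health summary and attributes (including temperature) for a member drive
smartctl -H /dev/sdb
smartctl -A /dev/sdb
# have mdadm watch the arrays and mail you about degraded/failed events
mdadm --monitor --scan --daemonise --mail=root@localhost
# optional: kick off a parity/consistency check of the array
echo check > /sys/block/md0/md/sync_action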
 
Old 09-08-2009, 02:06 AM   #12
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Rocky 9.2
Posts: 18,359

Rep: Reputation: 2751
You can only boot from RAID 1
Quote:
If you are making a RAID partition of /boot/, you must choose RAID level 1, and it must use one of the first two drives (IDE first, SCSI second). If you are not creating a separate RAID partition of /boot/, and you are making a RAID partition for the root file system (/), it must be RAID level 1 and must use one of the first two drives (IDE first, SCSI second).
http://www.linuxtopia.org/online_boo...id-config.html
Although that's an RH doc, I believe all Linux distros use the same technology for this (SW RAID). (?)
 
Old 09-08-2009, 05:15 AM   #13
garydale
Member
 
Registered: Feb 2007
Posts: 142

Rep: Reputation: 23
Quote:
Originally Posted by chrism01 View Post
You can only boot from RAID 1

http://www.linuxtopia.org/online_boo...id-config.html
Although that's an RH doc, I believe all Linux use the same technology for this (SW) RAID. (?)

That's true of software RAID, but a hardware RAID array looks like just one large drive to the operating system. This also means the hardware controller needs to know about the RAID configuration; otherwise it can't start the array.

The better hardware RAID cards allow you to save the RAID configuration data so you can reload it if you have to replace the hardware controller. If your motherboard goes, you may not want, or be able, to replace it with the same model even if you were able to make a backup of the RAID configuration data.

With software RAID, the configuration is stored on the disks. You can take your drives, mount them in a different Linux box and your RAID arrays will still be there. However, this precludes booting from any RAID array that spreads the data across disk drives.
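
A minimal sketch of what that looks like on the new box, since the superblocks on the disks themselves carry the configuration:

Code:
# scan all devices for md superblocks and report the arrays they describe
mdadm --examine --scan
# assemble every array found from those superblocks and start it
mdadm --assemble --scan
cat /proc/mdstat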

So you're left with allocating 75-100 MB for /boot as a RAID 1 array with multiple mirror drives (while most RAID 1 configurations use a single mirror drive, you can actually have as many as you want). Personally, I think it's a good trade-off.
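
A minimal sketch of such a /boot array with three mirrors (the partition names are assumptions; each would be a small type-fd partition at the start of its disk):

Code:
# three-way RAID 1 mirror for /boot
mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
mkfs.ext3 /dev/md1
mount /dev/md1 /boot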
 
  

