Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
Personally I'd back up all the data, then re-fdisk the RAID drives, one partition per drive.
Make sure you set the correct type 'fd' on each partition and 'w'rite the changes to the disks.
You may(!) need to reboot to ensure the changes take.
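A minimal sketch of that fdisk session, assuming one RAID member disk is /dev/sdb (repeat for each member; exact prompts vary between fdisk versions):

```shell
# WARNING: this destroys the existing partition table on the disk.
# o = new empty DOS partition table, n/p/1 = one primary partition,
# blank lines accept the default first/last sectors,
# t/fd = set type to Linux raid autodetect, w = write changes.
fdisk /dev/sdb <<'EOF'
o
n
p
1


t
fd
w
EOF

# Ask the kernel to re-read the partition table; reboot if this fails.
partprobe /dev/sdb
```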
Thanks for your suggestion. Unfortunately, it didn't work. It looks good until I reboot, then it re-corrupts.
Need more details; what exactly(!) do you mean by re-corrupt?
Did you reload the data after the fdisk+reboot? The old on-disk copy is already corrupt by the sounds of it. You probably need to go back to a previous 'known good' backup, i.e. one taken before your troubles started.
I have exactly the same problem.
I have a raid 5 with 4 full drives used.
Everything seems to be OK; I can mount, but I can't write because of the invalid partition table.
This is a new setup and there is no data in the drive ;-)
Chrism01 what do you mean by "Make sure you set the correct type 'fd' on each partition"?
Wjtaylor, did you fix your issue? How?
To summarize:
1. Create one partition on each hard drive, with type 'fd'
2. Create/assemble the RAID from those partitions as one new device
3. Format this new device
4. Mount this new device
If I'm right, I have to do it again from scratch. Good for learning!
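The four steps above can be sketched as shell commands; the device names (/dev/sd[b-e]1, /dev/md0) and the ext3 filesystem are assumptions, adjust to suit:

```shell
# 1. Partition each drive with one whole-disk partition of type 'fd'
#    (see the fdisk instructions earlier in the thread).

# 2. Create the RAID 5 array from the four partitions:
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# 3. Format the new device:
mkfs.ext3 /dev/md0

# 4. Mount it:
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid

# Record the array so it assembles cleanly at boot:
mdadm --detail --scan >> /etc/mdadm.conf
```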
Take care
You can change the partition type without creating a whole new partition table. Provided the drive is not in use, a reboot is usually not necessary. Booting from a live CD is recommended.
You may have to remove and re-add each partition in turn to the array, letting it re-sync before doing the next partition. DO NOT do two partitions in the same RAID device at the same time!
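A sketch of changing the type and cycling one member through the array, assuming /dev/sdb1 belongs to /dev/md0 (the sfdisk option name depends on your util-linux version):

```shell
# Change the partition type to 'fd' (Linux raid autodetect) in place.
# Older util-linux:
sfdisk --change-id /dev/sdb 1 fd
# Newer util-linux uses: sfdisk --part-type /dev/sdb 1 fd

# Remove and re-add the member so mdadm rewrites its metadata:
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md0 --add /dev/sdb1

# Wait for the re-sync to finish before touching the next partition!
cat /proc/mdstat
```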
I've just returned from travelling and have done some work on the box.
Here's a little background.
I have 1 IDE drive w/ the OS on it. The 3 SATA drives make up the raid.
The MB was not detecting the HDDs properly, so I adjusted some settings (IDE, AHCI). The drives were then properly detected in the BIOS, and it booted up fine.
mdadm emailed me, however, to report a degraded RAID event. It shows one drive down. I may have to recreate the RAID and reload the data. That's fine; I don't have much on it at the moment.
Has anyone encountered this before?
Here's the million dollar question though. I am using mdadm for linux software raid, NOT AHCI for motherboard raid. What do I need to be aware of for AHCI and mdadm to coexist and provide reliable raid (any raid level) storage?
This brings up several questions about RAID maintenance.
I just put in a SCSI card and will backup to tape.
What maintenance/safety procedures should I perform on the raid?
(fsck?, parity check, etc)
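On the parity-check question: md exposes a manual consistency check through sysfs. A sketch, assuming the array is /dev/md0 (fsck, by contrast, runs on the filesystem on /dev/md0, not on the member disks):

```shell
# Kick off a full parity/consistency check of the array.
echo check > /sys/block/md0/md/sync_action

# Watch progress:
cat /proc/mdstat

# After it finishes, mismatch_cnt should normally be 0:
cat /sys/block/md0/md/mismatch_cnt
```

Many distributions schedule exactly this from a monthly cron job.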
1) re. the degraded array, you simply need to get the bad drive/partition working again and mdadm will re-sync it. If your drives aren't identical, you may have to do some interesting experiments to make sure the partition sizes allow the drive to fit into your RAID array.
If mdadm doesn't find enough space in the degraded partition, it won't add it. It needs space at the end of the partition to write the superblock. When resizing my RAID array recently, I had to do some experiments to get the RAID array, the partitions and file system (ext3) to all work together.
You can grow the file system to automatically take up the full space, but not the RAID array. That has to be resized to fit into the partition while allowing space for the superblock. It's trial and error as mdadm will tell you if it can't make it fit, but will happily ignore excess space at the end. You have to play around to make it the largest possible size for your partitions.
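The grow interplay described above can be sketched in two commands, assuming the array is /dev/md0 with an ext3 filesystem on it:

```shell
# Grow the array to the largest size mdadm can fit into the
# partitions while leaving room for the superblock:
mdadm --grow /dev/md0 --size=max

# Then grow the filesystem to take up the full array automatically:
resize2fs /dev/md0
```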
2) AHCI is NOT RAID. It's a protocol for accessing the SATA drives. The RAID on your motherboard is almost certainly not as good as software RAID and I'd recommend not using it.
Software RAID and hardware RAID are in no way compatible. The RAID on your motherboard combines whole disks into a RAID array which you can then partition. Software RAID combines partitions into a RAID array. The partitions need to be marked as type "fd".
You can use both in the same computer - even using the same drives. Although I don't see any reason to run software RAID over hardware RAID, there is nothing to stop you from doing it. More sanely, you may have three small drives in a hardware RAID 5 array to boot from, and some larger drives configured using RAID 5 or 6 for data. This gets around the issue of booting into a RAID 5 array since the kernel just sees the hardware RAID drive.
However, disk space being what it is, I'd just create the boot partition as a small software RAID 1 array with multiple mirrors (you're not limited to one) and use the rest of the disk space for RAID 5.
Unless you're a Windows user, I'd forget about the hardware RAID on your motherboard.
Sorry, your third question: You don't need to do anything special beyond what you'd do for any disk drive. SMART monitoring is always a good idea.
Since you're already getting notices of degraded arrays, just make sure you don't ignore them!
On a preventative note: modern hard drives can generate a lot of heat, and heat shortens their life. Make sure you have enough air blowing over the drives to keep them cool (no hotter than 40 Celsius, normally).
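Both the SMART health status and the drive temperature can be checked with smartmontools; a sketch, assuming the drive is /dev/sda:

```shell
# One-off overall health check:
smartctl -H /dev/sda

# Drive temperature (usually SMART attribute 194):
smartctl -A /dev/sda | grep -i temperature

# For continuous monitoring, run the smartd daemon and have it
# mail you on problems, e.g. this line in /etc/smartd.conf:
#   /dev/sda -a -m root
```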
If you are making a RAID partition of /boot/, you must choose RAID level 1, and it must use one of the first two drives (IDE first, SCSI second). If you are not creating a separate RAID partition of /boot/, and you are making a RAID partition for the root file system (/), it must be RAID level 1 and must use one of the first two drives (IDE first, SCSI second).
That's true of software RAID, but a hardware RAID array looks like just one large drive to the operating system. This also means the hardware controller needs to know about the RAID configuration; otherwise it can't start the array.
The better hardware RAID cards allow you to save the RAID configuration data so you can reload it if you have to replace the hardware controller. If your motherboard goes, you may not want, or be able, to replace it with same model even if you were able to make a backup of the RAID configuration data.
With software RAID, the configuration is stored on the disks. You can take your drives, mount them in a different Linux box and your RAID arrays will still be there. However, this precludes booting from any RAID array that spreads the data across disk drives.
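Because the metadata lives on the disks themselves, moving the array to another Linux box is a two-command sketch:

```shell
# Scan all partitions for md RAID superblocks and show what's there:
mdadm --examine --scan

# Assemble every array found from the on-disk metadata:
mdadm --assemble --scan
```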
So you're left with allocating 75-100 MB as a RAID 1 array, with multiple mirror drives (while most RAID 1 configurations use a single mirror drive, you can actually have as many as you want). Personally, I think it's a good trade-off.
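A three-way-mirrored /boot like that can be created as follows; the partition names and /dev/md1 are assumptions:

```shell
# RAID 1 with three mirrors -- any one surviving disk can boot the box.
mdadm --create /dev/md1 --level=1 --raid-devices=3 \
      /dev/sda1 /dev/sdb1 /dev/sdc1
mkfs.ext3 /dev/md1
```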