Linux - Hardware: This forum is for Hardware issues.
Having trouble installing a piece of hardware? Want to know if that peripheral is compatible with Linux?
There are three kinds of RAID: software RAID, hardware-assisted software RAID (all cheap RAID cards and all motherboard-based RAID), and true hardware RAID. If you're going to go cheap, and I guarantee you will pay later, you can configure software RAID with the md tools in Linux. You just hook up the drives and make sure the one with the good data is first in the device order. Usually Parallel IDE 0 master is device 0 and Parallel IDE 0 slave is device 1, but if there are SATA connectors and the SATA controller is active, sometimes the SATA drives enumerate first (SATA 1 as 0, SATA 0 as 1), so SATA comes before PATA (parallel IDE). IDE 1 might be last, or it might land in the middle.
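As a rough sketch of that md setup (the device names here are assumptions; check your own box first, and note that creating an array wipes the listed partitions, so this is for fresh disks, not the one with the good data):

```shell
# Sketch only: /dev/md0, /dev/hda1 and /dev/hdc1 are assumed names.
# Check how your kernel actually numbered the drives before trusting
# any ordering:
cat /proc/partitions

# WARNING: --create overwrites the listed partitions.
# Build a two-disk mirror from two blank partitions:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1

# Watch the initial sync:
cat /proc/mdstat
```

These commands need root and real block devices, so treat them as a pattern to adapt rather than something to paste in.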
It is a really bad idea to use different-size drives with software RAID. Frankly, I think it's a bad idea to use anything but hardware RAID at all, and even then only for data that is online 24/7 and changes frequently; a set of relational database tables or a busy website are good examples. If you don't meet those conditions, you are much better off with a separate backup machine where you back up your data. In my opinion, the only decent PC-based RAID for small systems is RAID 5 on a processor-based, cached hardware RAID controller. It can be configured with four drives, any three of which hold enough data plus parity to rebuild the fourth, so if one drive fails you simply pop in a new one. It also works with only three drives, where any two can rebuild the third, but the parity overhead per drive is higher, so you waste more space on parity.
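To put numbers on that parity overhead (drive count and sizes below are purely illustrative):

```shell
# RAID5 usable space: with n equal drives, one drive's worth of
# capacity is consumed by parity, spread across all the drives.
n=4            # number of drives (illustrative)
size_gb=120    # capacity of each drive in GB (illustrative)
usable_gb=$(( (n - 1) * size_gb ))
overhead_pct=$(( 100 / n ))
echo "usable: ${usable_gb} GB, parity overhead: ${overhead_pct}%"
```

With four drives you lose a quarter of the raw space to parity; with only three you lose a third, which is the extra waste mentioned above.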
Now, unless you have hot-swappable RAID, you need to shut down to switch drives. When you boot back up, the array will rebuild itself. 3ware makes the cheapest decent hardware RAID cards that are compatible with Linux. But if your server is aging, you might want to hurry, because 3ware has moved over to PCI 2.2. There are still plenty of older cards available now; I've seen them for $170.00.
If you won't listen to me and you want to use software RAID anyway, use Linux software RAID with the md tools, but get drives that are all the same. And get some extra drives for when one fails, so the replacement is exactly like the others.
For IDE and SATA drives, you will be using software RAID even if you are using an onboard controller. The best solution would be a server with hot-swappable SCSI drives that are easy to access and controlled by a SCSI RAID subsystem. You could repair the RAID array by first reseating the bad drive and, if it turns out there is a hardware problem with the drive, replacing it. The system will rebuild the drive on the fly, and the server will keep running.
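Without that SCSI subsystem, the software-RAID equivalent of that swap looks roughly like this (array and device names are assumptions, and on non-hot-swap hardware you would power down for the physical swap):

```shell
# Mark the bad member failed and pull it out of the array
# (/dev/md0 and /dev/hdc1 are assumed names).
mdadm --manage /dev/md0 --fail /dev/hdc1
mdadm --manage /dev/md0 --remove /dev/hdc1

# ...shut down, swap the physical drive, partition it like the
# old one, then add the replacement and let md rebuild:
mdadm --manage /dev/md0 --add /dev/hdc1
cat /proc/mdstat   # shows the rebuild progress
```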
For IDE and SATA drives there is only software RAID, even if you have an onboard controller.
If you want to use Linux RAID, there are two programs to set up and control arrays. I think the one that is used now is "mdadm". Some distros like SuSE and Mandriva have GUI partitioner programs that allow you to set up your RAID that way.
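Once an array exists, mdadm can also report on it; a couple of read-only commands worth knowing (again, /dev/md0 is an assumed name):

```shell
# Show the state, members, and sync status of one array:
mdadm --detail /dev/md0

# Print config lines for all running arrays; this output is commonly
# appended to /etc/mdadm.conf so the arrays assemble at boot:
mdadm --detail --scan
```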
You could read the man pages for "mdadm", "mdadm.conf", and "md".
Consider printing them out for off-line study:
man -t mdadm | lpr
man -t mdadm.conf | lpr
man -t md | lpr
There are also some instructions in /usr/share/doc/packages/mdadm/.
The drives that make up an array should be at least the same type; to play it safe, they could be identical as well. You could pair 2x80 GB drives and 2x120 GB drives, but for redundancy it would be better to have more than two drives. The "md" and "mdadm" man pages detail the different types of RAID arrays.
For performance reasons, if using IDE RAID, don't use the slave connectors on the drive controllers. There are different types of RAID arrays. LINEAR (not really a RAID array) and RAID0 offer no redundancy. RAID1 uses mirroring: if the first drive goes bad, the second drive is used instead. RAID5 and RAID6 use parity for redundancy. There is also RAID10, which is RAID 1+0: it combines RAID0 and RAID1 to give you a RAID partition larger than any single drive, while using RAID1 mirroring for redundancy.
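A quick back-of-the-envelope comparison of those levels, assuming four equal 100 GB drives (numbers are illustrative; RAID10 here means two striped mirror pairs):

```shell
n=4; size=100   # four drives, 100 GB each (illustrative)
raid0=$(( n * size ))          # stripe: all the space, no redundancy
raid1=$(( size ))              # mirror: one drive's worth, n-way copies
raid5=$(( (n - 1) * size ))    # one drive's worth of parity
raid6=$(( (n - 2) * size ))    # two drives' worth of parity
raid10=$(( n / 2 * size ))     # half the drives hold mirror copies
echo "RAID0=${raid0} RAID1=${raid1} RAID5=${raid5} RAID6=${raid6} RAID10=${raid10}"
```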
You may also want to use Google to search for "linux raid mdadm"; you will run into some HOWTOs. The /usr/share/doc/packages/mdadm/ directory also has sample configurations.