LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - Hardware (http://www.linuxquestions.org/questions/linux-hardware-18/)
-   -   RAID Devices on Linux (http://www.linuxquestions.org/questions/linux-hardware-18/raid-devices-on-linux-461739/)

depam 07-07-2006 04:38 AM

RAID Devices on Linux
 
Guys,

I wanted to setup a RAID on our file server. Can you please tell me what are the things that I need? Aside from 2 80 GB and 1 120 GB HD, what else do I need to make it run on Linux?

AwesomeMachine 07-07-2006 11:53 PM

There are three categories: software RAID, hardware-assisted software RAID (all the cheap RAID cards and all motherboard-based RAID), and true hardware RAID. If you're going to go cheap, and I guarantee you will pay later, you can configure software RAID with the md tools in Linux. You just hook up the drives and make sure the one with the good data comes first in the device ordering. Usually parallel IDE 0 master is device 0 and parallel IDE 0 slave is device 1; but if there are SATA connectors and the SATA controller is active, SATA sometimes enumerates before PATA, so SATA 1 may be 0 and SATA 0 may be 1. IDE 1 might come last, or it might land in the middle.

It is a really bad idea to use different-size drives with software RAID. I think it's a really bad idea to use anything but hardware RAID, and even then only for data that is online 24/7 and changes frequently. A set of relational database tables or a busy website are good examples. If you don't meet those conditions you are much better off with a separate backup machine where you back up your data. In my opinion, the only decent PC-based RAID for small systems is RAID 5 on a processor-based, cached, hardware RAID controller. With four drives, any three of them together hold all the data on all four, so if one drive fails you simply pop in a new one. With only three drives the proportion of parity data on each drive grows, but any two still hold all the data on the three; you just give up more space to parity.

Now, unless you have hot-swappable RAID, you need to shut down to switch drives. When you boot back up, the array will rebuild itself. 3ware makes the cheapest decent hardware RAID cards that are compatible with Linux. But if your server is aging, you might want to hurry, because 3ware has gone over to PCI 2.2. There are still plenty of older cards available now; I've seen them for $170.00.
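On a Linux md array, the shut-down-and-swap procedure above is driven by mdadm. A minimal sketch, assuming a hypothetical array at /dev/md0 with a failed member /dev/sdb1 (adjust both names for your system):

```shell
# Mark the bad member failed, then pull it out of the array:
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
# (power down, swap in the replacement drive, boot back up)
# Add the new drive; the array rebuilds in the background:
mdadm --manage /dev/md0 --add /dev/sdb1
# Watch the rebuild progress:
cat /proc/mdstat
```

These commands require root and a live md array, so treat them as the shape of the procedure rather than something to paste in blindly.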

If you won't listen to me and you want to use software RAID, use Linux software RAID with the md tools, but get drives that are all the same. And get some extra drives for when one fails, so the replacement is exactly like the others.
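With the md tools, building a mirror out of two identical drives amounts to something like the following. A sketch only, assuming the drives are already partitioned as /dev/sdb1 and /dev/sdc1 (hypothetical names) and that you want ext3 on the result:

```shell
# Create a two-drive RAID1 mirror (device names are examples only):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# Put a filesystem on the new array and mount it:
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/raid
```

Needs root and dedicated partitions; --create will destroy whatever is on them.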

depam 07-10-2006 09:02 AM

So, where do I start? Do I need to install any package in Linux? I'm using SM 3.4.3

jschiwal 07-10-2006 02:24 PM

For IDE and SATA drives, you will be using software RAID even if you are using an onboard controller. The best solution would be a server with hot-swappable SCSI drives that are easy to access and that are controlled by the SCSI RAID subsystem. You could repair the RAID array by first reseating the bad drive and, if it turns out there is a hardware problem with the drive, replacing it. The system will repair the drive on the fly, and the server will keep running.

If you want to use Linux RAID, there are two programs to set up and control the arrays; I think the one used now is "mdadm". Some distros, like SuSE and Mandriva, have GUI partitioner programs that let you set up your RAID that way.
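Once an array exists, mdadm is also how you inspect it from the command line. A sketch, assuming an array at /dev/md0 with a member /dev/sdb1 (hypothetical names):

```shell
# The kernel's view of all md arrays, including any rebuild in progress:
cat /proc/mdstat
# Detailed state of one array (assumes /dev/md0 exists):
mdadm --detail /dev/md0
# Examine the md superblock on a member disk:
mdadm --examine /dev/sdb1
```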

You could read the man pages for "mdadm", "mdadm.conf", and "md".
Consider printing them out for off-line study:
man -t mdadm | lpr
man -t mdadm.conf | lpr
man -t md | lpr

There are also some instructions in /usr/share/doc/packages/mdadm/.

The drives that make up an array should be at least the same type; to play it safe, they could be identical as well. You could have 2x80 GB drives and 2x120 GB drives, but for redundancy it would be better to have more than two drives. The "md" and "mdadm" man pages detail the different types of RAID arrays.
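After building an array you will usually want it recorded in mdadm.conf so it assembles automatically at boot. A hedged sketch; the file path varies by distro (some use /etc/mdadm/mdadm.conf):

```shell
# Append the current array definitions to the config file:
mdadm --detail --scan >> /etc/mdadm.conf
# Later, assemble every array listed in the config:
mdadm --assemble --scan
```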

For performance reasons, if using IDE RAID, don't use the slave connectors on the drive controllers. There are different types of RAID arrays. LINEAR (not really a RAID array) and RAID0 offer no redundancy. RAID1 uses mirroring: if the first drive goes bad, the second drive is used instead. RAID5 and RAID6 use parity for redundancy. There is also RAID10, which is RAID 1+0; it combines RAID0 and RAID1 to let you have a RAID partition larger than any single drive, with RAID1 providing the redundancy.
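The space trade-off between those levels is easy to work out. A sketch using the original poster's 80 GB drives as an example (with mixed sizes, md can only use the smallest member's capacity on each drive):

```shell
# Usable capacity by RAID level, for n equal drives of $size GB each:
n=3; size=80                       # e.g. three 80 GB drives
raid0=$(( n * size ))              # striping: all space, no redundancy
raid1=$size                        # mirror: one drive's worth
raid5=$(( (n - 1) * size ))        # one drive's worth goes to parity
echo "RAID0: ${raid0} GB  RAID1: ${raid1} GB  RAID5: ${raid5} GB"
```

So three 80 GB drives give 240 GB as RAID0 but only 160 GB as RAID5; the missing 80 GB is what buys you the ability to lose a drive.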

You may also want to use Google to search for "linux raid mdadm"; you will run into some howtos. The /usr/share/doc/packages/mdadm directory also has sample configurations.

kdb4 07-10-2006 03:36 PM

Thanks for the info

