LinuxQuestions.org — Linux - Hardware: This forum is for Hardware issues.
I'm currently planning to build a 4-disk RAID server for SOHO use. I'm not sure which RAID level to go for at the moment, probably 0+1 or 5.
Anyway first the recommendation then the question... I'm looking at the Adaptec 2410SA and the LSI MegaRAID 150-4 SATA solutions. Requirements are:
* Hot swapping and rebuilding on drive failure.
* Array size expansion when replacing drives with larger drives.
* (Possible, though unlikely) 'Hot' RAID level changing.
Anyone have experience of these controllers on Linux and reasons to recommend one over the other?
Now the questions...
* With either of these controllers, should it fail, can an identical controller or an equivalent from the same manufacturer be used to continue running the array without hitch?
* Similarly, if I decide to rebuild the machine with a different motherboard etc. (it will be a part-time desktop machine), can the array be shifted over and simply restarted once the requisite setup is completed?
I'm thinking about trying the LSI MegaRAID SATA 150-4 on a dual Athlon system with Fedora Core 3. I want to format the RAID container with XFS. Do you have any tips or suggestions from your RAID adventures?
I have a pretty decent amount of experience with hardware and software RAID in Linux, so here's my 2 cents.
Unless you absolutely NEED hot-swap capability, please, please, please use Linux software RAID. The kernel itself has only so-so support for most hardware RAID, unless you spend serious money on the controller, or you're putting the controller in a PCI-X slot because you NEED serious speed. Also, most RAID card manufacturers only develop really good drivers for their top-end cards, which brings me back to spending a lot of $$$.

I have had nothing but great experiences with Linux software RAID. It is very easy, very fast and uses very little CPU power. It is also an extremely cheap solution compared with hardware RAID: you can spend $1000 on a pretty good RAID card to get good support and good performance/features, or you can spend $30 on a PCI controller that supports 4 drives. If you compile the driver for the card as a module, you can do a quasi hot swap by unmounting the drive(s), unloading the module, removing or adding a drive, and then re-loading the module. (This assumes you are using SATA or SCSI; it can't be done with ATA.)

Anyway, that's my experience. If you need top performance or constant hot swapping then go with hardware, but software RAID is easier, cheaper and more flexible (and the array can be moved to another Linux machine if need be). I can answer questions if you have any (sorry if I'm bad at explaining).
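To make the workflow above concrete, here is a rough sketch using mdadm. The device names, driver name (sata_promise) and mount point are examples only, not a recipe for any particular hardware:

```shell
# Build a 4-disk software RAID 5 array with the md driver
# (partition names are examples -- substitute your own).
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Quasi hot swap with a modular controller driver, as described above:
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1  # retire the bad drive
umount /mnt/array        # if the controller hosts mounted array members
modprobe -r sata_promise # unload the controller module (example driver)
# ...physically swap the drive, then...
modprobe sata_promise    # re-load the driver
mdadm /dev/md0 --add /dev/sdc1   # re-add; md rebuilds onto the new drive

# Portability: on another Linux machine, the array can be picked up with
mdadm --assemble --scan  # reads the md superblocks from the drives
```

Watching /proc/mdstat shows the rebuild progress after the --add step.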
I'm using the dual Athlon system as a PVR (MythTV). It may sound like overkill, but I will be migrating it to an HDTV PVR. No, I don't need hot-swap capability. The issue I have seen with software RAID is that it only supports ext3: I have tried MythTV with software RAID 0 on ext3 and I get frame drops (jittery video and sound), while with XFS I have no frame drops. I currently have a Promise TX4 and two 160GB SATA drives, which I haven't been able to RAID and format to XFS. With hardware RAID I can create a container in the BIOS with the RAID level I choose, and Linux should see it as just another hard drive that can be formatted with any filesystem. So my criteria are either RAID 5 (redundancy and striping) or RAID 0 (striping) with an XFS filesystem. If software RAID supported XFS I would use it in a second. Yes, I know I need another hard drive for RAID 5.
The dual Athlon motherboard doesn't have PCI-X, but it does have two 66MHz 32-bit PCI slots. I'm using one slot for GigE (I have a GigE switch with jumbo frame support) and the other for the RAID card.
Do be careful if you're planning to run FC3. There's a bug (https://bugzilla.redhat.com/bugzilla....cgi?id=138590) that effectively prevents use of FC3 with LSI MegaRAID cards. Just a caveat: I found out the hard way by running an FC1-->FC3 upgrade that completed without errors and left the system totally unbootable.
If you use software RAID you should have no problems using any filesystem the kernel was compiled for, though you do have to make primary partitions on each drive you want to use for software RAID. Linux software RAID doesn't care what filesystem sits on top of it; it just handles the data being written to or read from the hard drives. You have to format /dev/md0 before mounting it. Look through the man pages for mkfs.xfs options.
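For example, building a two-drive stripe set and putting XFS on top of it looks roughly like this (a sketch; device names and the mount point are examples):

```shell
# Partitions should be set to type "fd" (Linux raid autodetect) in fdisk.
# Build a RAID 0 stripe across two drives:
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1

# The md device is just a block device -- any filesystem works on it:
mkfs.xfs /dev/md0     # see man mkfs.xfs for stripe-tuning options
mount /dev/md0 /video
```

The same mkfs.xfs/mount step applies to a hardware RAID container, since it also appears as a single block device.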
Promise controllers are not hardware RAID; it's still software RAID.
Picking a RAID level depends on how valuable your data is, in this case your video and sound recordings. If you do not care about the shows or movies you recorded, you can use RAID 0. If you do care about them, and you have at least a dual-processor system and are using software RAID, use RAID 5. It is also possible to use RAID 0 and keep the filesystem journal on another drive.
Putting the journal on another drive will increase write performance for any journaled filesystem.
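With XFS, the external journal is set up at mkfs time and must also be named at mount time. A sketch, with example device names and log size:

```shell
# /dev/md0 holds the data; /dev/sde1 is a small partition on another disk
# that will hold the XFS log (journal).
mkfs.xfs -l logdev=/dev/sde1,size=32m /dev/md0

# The external log device must be given on every mount:
mount -o logdev=/dev/sde1 /dev/md0 /video
```

Keeping the log on a separate spindle means metadata journal writes don't compete with streaming video writes on the array.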
For RAID 1 and 0+1, the RocketRAID 133 provides disk mirroring, hot-spare options for automatic array-rebuilds, hot-swap support for swapping failed disks on the fly (works with Hot-Swap capable mobile racks such as Rocket Mate), and disk failure notification (audible alarms, visual warning messages).
OTOH, according to the PCGuide RAID levels description for RAID with parity, maybe a pure software RAID md# solution using the motherboard's built-in controller is really better?
The technique (or techniques) used to provide redundancy in a RAID array is a primary differentiator between levels. Redundancy is provided in most RAID levels through the use of mirroring or parity (which is implemented with striping):
* Mirroring: Single RAID level 1, and multiple RAID levels 0+1 and 1+0 ("RAID 10"), employ mirroring for redundancy. One variant of RAID 1 includes mirroring of the hard disk controller as well as the disk, called duplexing.
* Striping with Parity: Single RAID levels 2 through 7, and multiple RAID levels 0+3 (aka "53"), 3+0, 0+5 and 5+0, use parity with striping for data redundancy.
* Neither Mirroring nor Parity: RAID level 0 is striping without parity; it provides no redundancy.
* Both Mirroring and Striping with Parity: Multiple RAID levels 1+5 and 5+1 have the "best of both worlds", both forms of redundancy protection.
TYIA for any clarification on this; am a little unclear myself!!
Crap, the software RAID seems to be causing problems with MythTV's backend. MythTV will record a show or two, then hang. I have been seeing this for the past week. I saw some DMA error, but I can't find it in any of the /var/log files.
I changed the software RAID with XFS back to individual SATA drives formatted to XFS last night. I haven't seen the hang issue, and it has recorded about 20 shows since the changeover.