LinuxQuestions.org
Old 01-28-2010, 04:38 PM   #1
ineloquucius
LQ Newbie
 
Registered: Jan 2010
Posts: 27

Rep: Reputation: 0
Can Linux tools do the work of a high-end (expensive) SATA controller?


I'm not all that familiar with what the expensive SATA cards do, other than providing on-board cache and many ports, but all things considered, what can't a CPU and lots of RAM do that an expensive SATA card can? This assumes, of course, that you weren't planning on using the CPU to run other applications (e.g. a SAN-type setup).
 
Old 01-28-2010, 05:13 PM   #2
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,410

Rep: Reputation: 141
The system can do just about everything a hardware RAID controller can do. There is one situation, with a software RAID-1 array containing your system files ("/", "/boot", etc.), where, if the mirror isn't set up properly, you might not be able to boot from it if the primary goes down. Of course, you should be able to use a live CD to fix things in that situation. Other than that (mostly a time and convenience issue), it's just CPU cycles and I/O bandwidth.
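
As a rough sketch (assuming an mdadm mirror on /dev/sda1 and /dev/sdb1 and a GRUB-based distro; the device names are just placeholders), making both halves of the mirror bootable looks something like this:
Code:
# Existing mdadm RAID-1 (/dev/md0) built from /dev/sda1 and /dev/sdb1.
# Put the boot loader on BOTH member disks so the box can still boot
# if either drive dies.
grub-install /dev/sda
grub-install /dev/sdb
# Sanity-check the array afterwards.
cat /proc/mdstat
mdadm --detail /dev/md0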

Last edited by Quakeboy02; 01-28-2010 at 05:14 PM.
 
Old 01-29-2010, 01:59 AM   #3
r3sistance
Senior Member
 
Registered: Mar 2004
Location: UK
Distribution: CentOS 6/7
Posts: 1,375

Rep: Reputation: 217
With a lot of experience with RAID, I would say that hardware RAID cards are a lot more stable than any other form of RAID. Software RAID works suitably well, but host/fake RAID and motherboard-embedded RAID cause more issues than anybody should ever have to see. In my experience, host/fake RAID and embedded RAID usually don't work properly with CentOS and RHEL; I'm not sure about other distributions.

Hardware RAID is dedicated to the RAID function and takes the burden off the CPU and I/O. It should be noted that the more advanced/complex RAID levels tend to produce better results on hardware RAID, whereas with software RAID the increase in CPU load tends to have the reverse effect of lowering performance. Generally, by the time the difference would be truly noticeable, the CPU is likely already under load. Hardware RAID cards are also more specialised in areas like disk performance, even more so under RAID conditions.

Lastly, hardware RAID tends to be much better at rebuilding than software RAID: with software RAID, everything will likely need to be manually reconfigured, whereas a hardware RAID will more likely do the rebuilding automatically, with less work and less array downtime.
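
To give an idea of what the manual side looks like, here is a rough mdadm rebuild sketch (array and device names are placeholders; the exact steps depend on your layout):
Code:
# Kick the dead member out of the array.
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
# After swapping the physical disk, copy the partition layout across
# (MBR disks; use sgdisk for GPT) and re-add the new member.
sfdisk -d /dev/sda | sfdisk /dev/sdb
mdadm --manage /dev/md0 --add /dev/sdb1
# The resync then runs on its own; watch its progress here.
watch cat /proc/mdstat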
 
1 member found this post helpful.
Old 01-29-2010, 11:20 AM   #4
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 11,311
Blog Entries: 4

Rep: Reputation: 4152
I totally agree with r3sistance on this one.

"Dude, this is your priceless data we're talking about!" If you're going to spend money on anything at all, spend it on the best, most reliable and speedy hardware you can buy. This is not the right place to "cut corners."
 
Old 01-29-2010, 11:46 AM   #5
Quakeboy02
Senior Member
 
Registered: Nov 2006
Distribution: Debian Linux 11 (Bullseye)
Posts: 3,410

Rep: Reputation: 141
I also totally agree with r3sistance, but not so much with sundialsvcs.

Whether your arrays are on hardware or software controllers shouldn't affect their reliability. If you use RAID 0 you're taking a big risk, no matter what. OTOH, if you're running a server, then you have no business using software RAID, because the time and effort to restore an array, or a system, may have an unreasonable impact on your customers.

For the home user, there's the issue of dependence on a difficult-to-source controller. Will your system be down for a week if your controller fails?

There are a number of issues to consider when thinking about running RAID. One big issue is whether you really gain anything in your particular situation. For the home user with a bunch of DVDs or other mostly static data on a large drive, probably not so much. For a commercial user with a database and a bunch of users, probably. But neither user should convince themselves that their use of RAID obviates the need for backups. An array is not a backup plan.
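
Even something as simple as a nightly rsync to a separate disk or box covers the failure modes RAID can't (the paths and hostname below are made up, purely to illustrate):
Code:
# RAID won't save you from deletion, corruption or a fried controller;
# a copy somewhere else will.
rsync -a --delete /home/ /mnt/backup/home/
# Or push it to another machine entirely.
rsync -a --delete /home/ backupbox:/srv/backups/home/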
 
Old 01-29-2010, 12:21 PM   #6
r3sistance
Senior Member
 
Registered: Mar 2004
Location: UK
Distribution: CentOS 6/7
Posts: 1,375

Rep: Reputation: 217
Quote:
Originally Posted by Quakeboy02 View Post
An array is not a backup plan.
I can't think of words any more true than these. A point worth noting is that a RAID mirror will mirror human error! In the industry I have worked in for nearly the past 3 years, I have seen a few occasions of entire 5 TiB+ RAID 5s being deleted/destroyed/formatted/etc. just by human error.

To be honest, RAID 0 is generally used by gamers who think they are getting top-notch rigs from it; in reality, a RAID 0 should only be used as a top-level RAID over lower RAIDs, i.e. a RAID 10.

Last edited by r3sistance; 01-29-2010 at 12:24 PM.
 
Old 01-29-2010, 01:27 PM   #7
PTrenholme
Senior Member
 
Registered: Dec 2004
Location: Olympia, WA, USA
Distribution: Fedora, (K)Ubuntu
Posts: 4,187

Rep: Reputation: 354
The O.P. was about a SATA controller:
Quote:
I'm not all that familiar with what the expensive SATA cards do, other than providing on-board cache and many ports, but all things considered, what can't a CPU and lots of RAM do that an expensive SATA card can? This assumes, of course, that you weren't planning on using the CPU to run other applications (e.g. a SAN-type setup).
So, you've all provided lots of nice advice re hardware RAID controllers, but not addressed the question raised by the O.P. Why is that?

So, to answer the OP's question: no, you actually have to have the hardware to map the data from the hard drive to the system's RAM and, depending on your needs, connect your drives to the motherboard of your system. (That's the "ports" you mentioned.) Depending on how many SATA drives you need, you can use several small, inexpensive SATA controllers or one controller that supports lots of ports. If you need lots of drives, the "expensive" option is probably actually cheaper, but the "inexpensive" one may eliminate a "single point of failure" problem.
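
If you want to see what controllers and ports you already have before spending anything, something along these lines will show you (output obviously varies by system):
Code:
# SATA/RAID/SAS controllers the kernel can see.
lspci | grep -i -E 'sata|raid|sas'
# Which drives hang off which transport, with size and model.
lsblk -o NAME,SIZE,MODEL,TRAN
# The raw controller-to-disk mapping, if you want the detail.
ls -l /sys/block/sd*/device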
 
Old 01-29-2010, 01:44 PM   #8
r3sistance
Senior Member
 
Registered: Mar 2004
Location: UK
Distribution: CentOS 6/7
Posts: 1,375

Rep: Reputation: 217
Because there isn't really such a thing as an "expensive SATA card". The OP really meant a SATA RAID controller, as per his later comment about a "SAN setup". Anyway, why would you buy an "expensive SATA card" when, at that price point, you could get a RAID card without losing too many connections, or even move to a SAS RAID card?
 
Old 01-29-2010, 02:45 PM   #9
ineloquucius
LQ Newbie
 
Registered: Jan 2010
Posts: 27

Original Poster
Rep: Reputation: 0
I guess I'm guilty of asking a vague question here. I did in fact mean a SATA RAID controller, versus software-based RAID (using either ordinary SATA cards or even motherboard-based controllers). The point about potential compromises in reliability is well taken, but what else can hardware offer over software?

To be even more specific, I'm generally more concerned with latency which, in my experience, is usually more of an issue than throughput is. I find the problem is usually in taking orders and putting them up, not in how big the portions are (to use a somewhat feeble analogy). My thought was that software might have an advantage, since motherboards have the capacity to hold many gigabytes of RAM. But, for example, does the bus speed undo any such advantage?

Also, I won't be concerned if my CPU usage hovers at 40-60% -- I'd consider that hardware that's earning its pay-scale. But let me not foment an off-topic argument with that; the real question is -- reliability aside -- what does hardware RAID have over software RAID on a dedicated system?

Thanks, btw, for the thoughtful input so far.

Last edited by ineloquucius; 01-29-2010 at 02:46 PM. Reason: Forgot to say thank you.
 
Old 01-29-2010, 04:18 PM   #10
r3sistance
Senior Member
 
Registered: Mar 2004
Location: UK
Distribution: CentOS 6/7
Posts: 1,375

Rep: Reputation: 217
Hardware is more reliable, and it is faster, but the size of the speed advantage depends on exactly what you are doing, the file system (some FSs handle software RAID better than others) and a few other things. Throughput is going to be a major consideration if you are worried about "lag", since if there isn't enough throughput, what do you suffer? A hardware card will beat just about anything on the mobo in terms of speed and performance, and a RAID card is optimised for sheer HDD performance far more than most on-board SATA controllers.
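
If latency is the real worry, it is usually easier to measure it on the actual setups than to argue about it; a rough fio sketch (the parameters are only examples, and --readonly keeps it non-destructive):
Code:
# Small random reads at queue depth 1 -- roughly a latency test.
fio --name=lat --filename=/dev/md0 --readonly --direct=1 \
    --rw=randread --bs=4k --iodepth=1 --runtime=60 --time_based
# Big sequential reads -- roughly a throughput test.
fio --name=bw --filename=/dev/md0 --readonly --direct=1 \
    --rw=read --bs=1M --iodepth=16 --runtime=60 --time_based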

As for it being an expansion card: the data still goes HDD > controller > RAM > CPU cache (simplified), so the model isn't really complicated by the card being a physical expansion card. On top of that, the data from the RAID card is served as if it came straight off a hard drive, generally already in order; with software RAID, when the data reaches RAM it isn't necessarily in order, so the CPU has to do work to order it. It might seem reasonable that if the CPU can handle the load it would be as fast as a RAID card, but you have to order the data before you can use it, and with the RAID card it arrives already ordered.

As I said before, the more complicated the RAID, the more negatively performance is affected with software RAID, whereas with hardware RAID it is positively affected (as long as the RAID card supports that level). More complex RAIDs with more discs mean more discs to read from and write to, dividing the writing between them; for example, with a RAID 10 on 4 hard drives, the data is split between two RAID 1s, yet a stream of data can potentially be read from all 4 drives. This sounds like it should benefit both hardware and software RAID, but with software RAID you are passing management on to the CPU, which has to organise the data on both the HDDs (which HDDs to write to, which HDDs to read from) and in RAM. Once all that management is accounted for, even with the benefit of more throughput to and from the HDDs, the performance gains are counteracted to a significant degree.
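
For what it's worth, a minimal sketch of building that 4-drive layout in software (placeholder device names, and mkfs.ext4 only as an example filesystem):
Code:
# Software RAID 10: a stripe over two mirrored pairs, four drives total.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mkfs.ext4 /dev/md0
# Confirm the layout and which member sits where.
mdadm --detail /dev/md0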

We have also mentioned that hardware is much superior to software for rebuilds, and if you are running a SAN with more than 4 HDDs the chances are you'll see one HDD die at some point; with more than 10 HDDs you really need to count on a failed HDD. So you either need to be able to rebuild the RAID quickly or have enough hot spares to cover the lost discs. As mentioned earlier, a RAID 0 is completely dead if it loses a member, no matter what you do, while other RAID levels can survive at least one disc death before being destroyed.

Anyway, that is a very long post... While the CPU load might seem reasonable at only 40~60%, remember that the management performed by the hardware card means data streams more smoothly to the CPU, while with software RAID you read/write data, then organise it, then read/write, then organise... The CPU load doesn't have to hit 100% for the speeds to differ; even 20% could be enough to see a difference in performance between hardware and software RAID.
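
One way to check whether that overhead actually bites on a given box is to watch the md kernel threads and the disks while the array is under load (md0 here is just a placeholder name):
Code:
# CPU time burned by the md kernel threads for the array.
top -b -n 1 | grep md0_
# Per-disk utilisation and wait times while the workload runs.
iostat -x 5
# Resync/rebuild speed, if one is in progress.
cat /proc/mdstat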

Last edited by r3sistance; 01-29-2010 at 04:20 PM.
 
Old 01-29-2010, 07:44 PM   #11
jefro
Moderator
 
Registered: Mar 2008
Posts: 22,361

Rep: Reputation: 3692
You can't beat a true enterprise-level hardware RAID. The board is made to do what it does as a single-purpose device. If you want fast, you need that board.
 
Old 01-30-2010, 06:08 PM   #12
ineloquucius
LQ Newbie
 
Registered: Jan 2010
Posts: 27

Original Poster
Rep: Reputation: 0
Quote:
if you are running a SAN with more than 4 HDDs the chances are you'll see one HDD die at some point
Yup. Happened on one of four in a RAID 10 that was only a year and a half into service.

Thanks to all for some very comprehensive responses. Much appreciated.
 
  

