01-28-2010, 04:38 PM | #1 | LQ Newbie | Registered: Jan 2010 | Posts: 27
Can Linux tools do the work of a high-end (expensive) SATA controller?
I'm not all that familiar with what it is that the expensive SATA cards do, other than providing on-board cache and many ports, but all things considered, what can't a CPU and lots of RAM do that an expensive SATA card can? This assumes, of course, that you weren't planning on using the CPU to run other applications (i.e. a SAN-type setup).
01-28-2010, 05:13 PM | #2 | Senior Member | Registered: Nov 2006 | Distribution: Debian Linux 11 (Bullseye) | Posts: 3,410
The system can do just about everything a hardware RAID controller can do. There is one situation, when a software RAID-1 array contains your system files ("/", "/boot", etc.), where, if you don't have the mirror set up properly, you might not be able to boot from it if the primary disk goes down. Of course, you should be able to use a live CD to fix things in such a situation. Other than that (mostly a time and convenience issue), it's just CPU cycles and I/O bandwidth.
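For example, a rough sketch of the boot gotcha, assuming a two-disk mirror on /dev/sda1 and /dev/sdb1 (device names are only placeholders): put the bootloader on both members, not just the first disk, so either one can boot the box on its own.
Code:
# create the mirror for / (placeholders -- adjust devices to your own layout)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# install the bootloader on BOTH members, so the box still boots if sda dies
grub-install /dev/sda
grub-install /dev/sdb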
Last edited by Quakeboy02; 01-28-2010 at 05:14 PM.
01-29-2010, 01:59 AM | #3 | Senior Member | Registered: Mar 2004 | Location: UK | Distribution: CentOS 6/7 | Posts: 1,375
With a lot of experience with RAID, I would say that hardware RAID cards are a lot more stable than any other form of RAID. Software RAID works suitably well, but host/fake RAID and motherboard-embedded RAID cause more issues than anybody should ever, ever have to see. In my experience, host/fake RAID and embedded RAID usually don't work properly with CentOS and RHEL; I'm not sure about other distributions.
Hardware RAID is dedicated to the RAID function and takes the burden off the CPU and I/O. It should be noted that the more advanced/complex RAID levels tend to produce better results on hardware RAID, whereas with software RAID the increased CPU load tends to have the reverse effect of lowering performance. Generally, by the point where the difference would be truly noticeable, the CPU is likely already under load. Hardware RAID cards are also more specialised in areas like disc performance, especially under RAID conditions.
Lastly, hardware RAIDs tend to be much better at rebuilding than software RAIDs: with software RAID everything will likely need to be manually reconfigured, whereas a hardware RAID will more likely do the rebuilding automatically, with less work and less RAID downtime.
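For comparison, this is roughly what the software (md) side of a rebuild looks like; it only happens by itself if you have already added a spare, otherwise you are replacing and re-adding discs by hand (device names are placeholders):
Code:
# keep a hot spare in the array; md rebuilds onto it automatically if a member fails
mdadm --manage /dev/md0 --add /dev/sde1
# watch rebuild progress
cat /proc/mdstat
mdadm --detail /dev/md0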
1 member found this post helpful.
01-29-2010, 11:20 AM | #4 | LQ Guru | Registered: Feb 2004 | Location: SE Tennessee, USA | Distribution: Gentoo, LFS | Posts: 11,311
I totally agree with r3sistance on this one.
"Dude, this is your priceless data we're talking about!" If you're going to spend money on anything at all, spend it on the best, most reliable and speedy hardware you can buy. This is not the right place to "cut corners."
01-29-2010, 11:46 AM | #5 | Senior Member | Registered: Nov 2006 | Distribution: Debian Linux 11 (Bullseye) | Posts: 3,410
I also totally agree with r3sistance, but not so much with sundialsvcs.
Whether your arrays are on hardware or software controllers shouldn't have much impact on their reliability. If you use RAID 0 you're taking a big risk, no matter what. OTOH, if you're running a server, then you have no business using soft RAID, because the time and effort to restore an array, or a system, may have an unreasonable impact on your customers.
For the home user, there's the issue of dependence on a hard-to-get controller. Will your system be down for a week if your controller fails?
There are a number of issues to be considered when thinking about running RAID. One big issue is whether you really gain anything in your particular situation. For the home user with a bunch of DVDs or other, mostly static, data on a large drive, probably not so much. For a commercial user with a database and a bunch of users, probably. But, neither user should convince themselves that their use of RAID obviates the need for backups. An array is not a backup plan.
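Even a simple nightly rsync to a separate, non-RAID disk counts as a backup. A rough sketch (the paths are only placeholders; adjust to your own layout):
Code:
# crontab entry: copy /home to a separate backup disk every night at 02:00
# paths are placeholders -- point them at your own data and backup mount
0 2 * * * rsync -a --delete /home/ /mnt/backup/home/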
01-29-2010, 12:21 PM | #6 | Senior Member | Registered: Mar 2004 | Location: UK | Distribution: CentOS 6/7 | Posts: 1,375
Quote:
Originally Posted by Quakeboy02
An array is not a backup plan.
|
I can't think of truer words than these. Worth noting is that a RAID mirror will mirror human error! In the industry I have worked in for the past nearly three years, I have seen a few occasions of entire 5 TiB+ RAID 5s being deleted/destroyed/formatted/etc. purely through human error.
To be honest, RAID 0 is generally used by gamers who think they are getting top-notch rigs out of it; in reality, a RAID 0 should only be used as the top level of a nested RAID over lower RAIDs, i.e. a RAID 10.
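To illustrate what I mean by a top-level RAID 0 over lower RAIDs, here is a rough md sketch of a four-disc RAID 10 built that way (device names are placeholders; md also has a native --level=10 that does the same job in one step):
Code:
# two mirrors (RAID 1)...
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
# ...and a stripe (RAID 0) over the top of them
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2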
Last edited by r3sistance; 01-29-2010 at 12:24 PM.
01-29-2010, 01:27 PM | #7 | Senior Member | Registered: Dec 2004 | Location: Olympia, WA, USA | Distribution: Fedora, (K)Ubuntu | Posts: 4,187
 The O.P. was about a SATA controller:
Quote:
I'm not all that familiar with what it is that the expensive SATA cards do, other than providing on-board cache and many ports, but all things considered, what can't a CPU and lots of RAM do that an expensive SATA card can? This assumes, of course, that you weren't planning on using the CPU to run other applications (i.e. a SAN-type setup).
|
So, you've all provided lots of nice advice re hardware RAID controllers, but not addressed the question raised by the O.P. Why is that?
So, to answer the OP's question: no, you actually have to have the hardware to map the data from the hard drive into the system's RAM and, depending on your needs, to connect your drives to the motherboard of your system. (That's the "ports" you mentioned.) Depending on how many SATA drives you need, you can use several small, inexpensive SATA controllers or one card that supports lots of ports. If you need lots of drives, the "expensive" option is probably actually cheaper, but the "inexpensive" one may eliminate a "single point of failure" problem.
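If you want to see how many controllers and ports a given box actually has, and which drives hang off which controller, something like this will show it (output columns may vary by distribution):
Code:
# list SATA/RAID controllers on the PCI bus
lspci | grep -iE 'sata|ahci|raid'
# show drives with their host:channel:target:lun, i.e. which controller port they sit on
lsblk -o NAME,HCTL,SIZE,MODEL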
01-29-2010, 01:44 PM | #8 | Senior Member | Registered: Mar 2004 | Location: UK | Distribution: CentOS 6/7 | Posts: 1,375
Because there is not really such a thing as an "expensive SATA card". The OP really meant a SATA RAID controller, as per his later comment about a "SAN setup". Anyway, why would you buy an "expensive SATA card" when, at that price point, you could get a RAID card for similar money without losing too many connections, or even move to a SAS RAID card?
01-29-2010, 02:45 PM | #9 | LQ Newbie | Registered: Jan 2010 | Posts: 27 | Original Poster
I guess I'm guilty of forming a vague question here. I did in fact mean a SATA RAID controller, versus a software-based RAID (using either ordinary SATA cards or even motherboard-based controllers). The point about potential compromises in reliability is well taken, but what else can hardware offer over software?
To be even more specific, I'm generally more concerned with latency, which, in my experience, is usually more of an issue than throughput is. I find the problem is usually in taking orders and putting them up, not in how big the portions are (to use a somewhat feeble analogy). My thought was that software might have an advantage, since motherboards have the capacity to hold many GBytes of RAM. But, for example, does the bus speed undo any such advantage?
Also, I won't be concerned if my CPU resources hover at 40-60%--I'd consider that hardware that's earning its pay scale. But let me not foment off-topic argument with that; the real question is--reliability aside--what does hardware RAID have over software with a dedicated system?
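For context, this is roughly how I've been looking at latency versus throughput, using fio (the target path is only a placeholder; point it at a test file or an idle array, not at a disk holding data you care about):
Code:
# queue-depth-1 4k random reads: the latency numbers are what matter here
fio --name=lat --rw=randread --bs=4k --iodepth=1 --direct=1 --runtime=30 --time_based --filename=/path/to/testfile --size=1G
# large sequential reads: the bandwidth number is what matters here
fio --name=bw --rw=read --bs=1M --ioengine=libaio --iodepth=16 --direct=1 --runtime=30 --time_based --filename=/path/to/testfile --size=1G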
Thanks, btw, for the thoughtful input so far.
Last edited by ineloquucius; 01-29-2010 at 02:46 PM.
Reason: Forgot to say thank you.
01-29-2010, 04:18 PM | #10 | Senior Member | Registered: Mar 2004 | Location: UK | Distribution: CentOS 6/7 | Posts: 1,375
Hardware is more reliable, and it is faster, but how big the speed advantage is depends on exactly what you are doing, on the file system (some file systems handle software RAID better than others) and on a few other things. Throughput is going to be a major consideration if you are worried about "lag", because if there is not enough throughput, what do you suffer? Lag. Hardware will beat just about anything on the motherboard in terms of speed and performance, and a RAID card is optimised for sheer HDD performance more than most on-board SATA controllers are.
As for it being an expansion card: the data is still going HDD > controller > RAM > CPU cache (simplified), so the model isn't really complicated by it being a physical expansion card. Add to that that the data from the RAID card is served up as if it came straight from a single hard drive, already in order, whereas with a software RAID, when the data gets to RAM it is not necessarily in order, so the CPU has to do the work of ordering it. It might seem reasonable that, if the CPU can handle the load, software would be as fast as a RAID card, but you have to order the data before you can use it, and with the RAID card it is already ordered.
As I said before, the more complicated the RAID level, the more negatively performance is affected for a software RAID, whereas for a hardware RAID it is positively affected (as long as the RAID card supports that level). More complex RAIDs with more discs mean more discs to read from and write to, dividing the writing between them: for example, in a RAID 10 with four hard drives the data is split between two RAID 1s, yet a single stream of data can potentially be read from all four drives. That sounds like it should benefit both hardware and software RAID, but with software RAID you are passing the management on to the CPU, which has to handle the extra organisation of the data both on the HDDs (which HDDs to write to, which HDDs to read from) and in RAM. Once that management is accounted for, even with the benefit of more throughput to and from the HDDs, the performance gains are counteracted to a significant degree by all the additional management.
We have also mentioned that hardware is much superior to software for rebuilds, and if you are running a SAN with more than 4 HDDs, the chances are you'll see one HDD die at some point; with more than 10 HDDs you really need to count on a failed HDD. So you either need to be able to rebuild the RAID quickly or have enough hot spares to handle the lost discs. As mentioned earlier, a RAID 0 is completely dead if it loses a single member, no matter what kind of RAID you are doing, while other RAID levels require more than one disc death to be destroyed.
Anyway, that is a very long post... While the CPU load might seem reasonable because the CPU is only at 40~60%, remember that the management performed by the hardware card means data streams to the CPU better, while with software you read/write data, then organise it, then read/write, then organise... The CPU load doesn't have to hit 100% for the speeds to diverge; even 20% could potentially be enough to see a difference in speed between hardware and software RAID performance.
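If you want to see that overhead for yourself, watch the md kernel threads while the array is busy or resyncing, roughly like this (md0 is a placeholder for your own array):
Code:
# start a consistency check on md0 to generate some background RAID work
echo check > /sys/block/md0/md/sync_action
# watch the md kernel threads' CPU use, and per-disc utilisation
top -b -n 1 | grep -E 'md[0-9]+_(raid|resync)'
iostat -x 1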
Last edited by r3sistance; 01-29-2010 at 04:20 PM.
01-29-2010, 07:44 PM | #11 | Moderator | Registered: Mar 2008 | Posts: 22,361
You can't beat a true enterprise-level hardware RAID. The board is made to do what it does as a single-purpose device. If you want fast, you need that board.
01-30-2010, 06:08 PM | #12 | LQ Newbie | Registered: Jan 2010 | Posts: 27 | Original Poster
Quote:
if you are running a SAN with more than 4 HDDs, the chances are you'll see one HDD die at some point
|
Yup. Happened on one of four in a RAID 10 that was only a year and a half into service.
Thanks to all for some very comprehensive responses. Much appreciated.