LinuxQuestions.org > Forums > Linux Forums > Linux - Hardware
Old 03-13-2011, 02:02 PM   #1
Skaperen
Senior Member
 
Registered: May 2009
Location: center of singularity
Distribution: Xubuntu, Ubuntu, Slackware, Amazon Linux, OpenBSD, LFS (on Sparc_32 and i386)
Posts: 2,681
Blog Entries: 31

Why not a 3-way or even 4-way RAID level 1 (mirror)?


Why could there not be a 3-way or even 4-way RAID level 1 (mirror)? It seems every hardware (and at least the software I tested a few years ago) RAID controller only supports a 2-way mirror.

I recently tried to configure a 3ware 9650SE RAID controller. I selected all 3 drives. Then RAID 1 was not presented as an option; only RAID 0 (striping, no redundancy) and RAID 5 (one level of redundancy, low performance) were offered. Is there some engineer who thinks "triple redundancy is a waste, so I'm not going to let them do that"? Or is it a manager?

Mirror RAID should be simple, even when more than 2 drives are used. The data is simply written in parallel to all the drives in the mirror set, and read from one of the drives (with load balancing over parallel and/or read-ahead operations to improve performance, though some of this is in question, too).
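The write-to-all, read-from-one logic described above really is simple; here is a toy in-memory sketch of an N-way mirror with round-robin read balancing (illustrative model only, all names made up, not real driver code):

```python
# Toy model of an N-way RAID 1 mirror: every write goes to all members,
# reads are load-balanced round-robin across them.
class MirrorSet:
    def __init__(self, n_drives, n_sectors):
        self.drives = [[None] * n_sectors for _ in range(n_drives)]
        self._next = 0  # round-robin read pointer

    def write(self, sector, data):
        for drive in self.drives:        # written in parallel to all members
            drive[sector] = data

    def read(self, sector):
        drive = self.drives[self._next]  # pick one member, rotate for balance
        self._next = (self._next + 1) % len(self.drives)
        return drive[sector]

m = MirrorSet(n_drives=3, n_sectors=8)
m.write(0, b"hello")
print(m.read(0))  # every copy is identical, so any member can serve the read
```

Nothing in this logic cares whether the set has 2, 3, or 4 members, which is the point of the question.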
 
Old 03-13-2011, 02:07 PM   #2
acid_kewpie
Moderator
 
Registered: Jun 2001
Location: UK
Distribution: Gentoo, RHEL, Fedora, Centos
Posts: 43,417

If you think that 2 drives failing is feasible, then something is surely very wrong with the hardware you're using. RAID 1 specifies 2 drives, so you can't use more than that and still have it be RAID 1. What you can do, though, is have spare devices, if you have that little confidence in your drives. So if one fails, one of a pool of other drives can be used to recreate the mirror.
 
Old 03-13-2011, 02:26 PM   #3
Skaperen
Senior Member
Original Poster
Quote:
Originally Posted by acid_kewpie View Post
if you think that 2 drives failing is feasible, then something is surely very wrong with the hardware you're using. Raid 1 specifies 2 drives, so you can't use more than that and it still be raid 1. What you can do though is have spare devices if you have that little confidence in your drives. So if one fails, one of a pool of other drives can be used to recreate the mirror.
It's more that I might not be able to get to the machine that has the failure for many days. Hot spares sound like a viable option.

Still, the logic required to implement mirroring would be trivial for N-way. It makes no sense not to have it as an option.
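The arithmetic behind wanting a third copy on a machine you can't reach is straightforward. A rough sketch, assuming independent drive failures and an illustrative, made-up per-drive failure probability over the unattended window:

```python
# Probability that ALL copies fail (data loss) during an exposure window,
# assuming independent drive failures with probability p each.
p = 0.05             # illustrative per-drive failure probability, not a real figure

loss_2way = p ** 2   # both copies of a 2-way mirror fail
loss_3way = p ** 3   # all three copies of a 3-way mirror fail

print(loss_2way)     # roughly 0.0025
print(loss_3way)     # roughly 0.000125
```

Each extra copy multiplies the loss probability by p again, which is why a third mirror member helps most exactly when repair is slow.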

Last edited by Skaperen; 03-13-2011 at 02:27 PM.
 
Old 03-13-2011, 02:31 PM   #4
acid_kewpie
Moderator
Quote:
Originally Posted by Skaperen View Post
It's more a case of I might not be able to get to the machine that has the failure for many days. The hot spares sounds like a viable option.

Still, the logic required to implement mirroring would be trivial for N-way. It makes no sense not to have it as an option.
You do have the option of using mirroring within LVM, which allows multiple mirrors to be created, not just one. The guide here doesn't say whether it's limited to 2 mirrors, but it may well not be practically limited at all... http://www.centos.org/docs/5/html/Cl...LV_create.html

As for RAID not supporting it: RAID needs to fit into standards, so you would need to ratify a standard for that kind of RAID if it were to be implemented, and as the world has kept turning this long, there seems little chance of that ever happening.
 
Old 03-13-2011, 02:47 PM   #5
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Quote:
Originally Posted by Skaperen View Post
RAID 5 (one level of redundancy, low performance).
Why is RAID 5 low performance?

I know about it more in theory than in practice, so maybe I'm missing something. Are you worried about the disk performance or the extra computation?

The extra computation is just for write, isn't it? For read you have a clean copy of each sector. Compute power is pretty inexpensive (at the level of the extra computation RAID 5 needs). Even for software RAID, I don't think it is inefficient.

The disk performance for read is similar to RAID 0 (and, like RAID 0, normally better than a single non-RAID drive). Large transfers distribute over 3 drives, getting the same transfer-rate benefit as 3-way RAID 0, and less than RAID 0 (but still some) benefit relative to head-movement time.

For large writes, you would write one sector to each drive for each two sectors of actual data. That sounds twice as efficient as RAID 1.

Scattered small writes may be quite a bit uglier. But typical filesystem workloads don't include a large fraction of scattered small writes.

Quote:
Is there some engineer who thinks "triple redundancy is a waste,
Yes.

Quote:
so I'm not going to let them do that
Not exactly.

Obscure features that few customers want are extra opportunities to put bugs in the system. Focusing on what most customers want is usually better engineering.

Last edited by johnsfine; 03-13-2011 at 02:48 PM.
 
Old 03-13-2011, 03:05 PM   #6
Skaperen
Senior Member
Original Poster
Quote:
Originally Posted by acid_kewpie View Post
You do have the option of using mirroring within LVM, that does allow multiple mirrors to be created, not just one. It doesn't say in the guide here if it's limited to 2 mirrors, but may well not be practically limited at all... http://www.centos.org/docs/5/html/Cl...LV_create.html

As for RAID not supporting it, RAID needs to fit into standards, you would want to ratify a standard for that kind of RAID if it were to be implemented, and as the world has kept turning this long, there seems little chance of that ever happening.
Fitting into standards does not mean denying what could be done. A 3-way RAID 1 does provide better redundancy. There's no reason for a standard to say "3-way is not allowed". It can simply say "2-way is required", and let the manufacturer decide on any more than that. And maybe it was their decision and they made it. But it is silly to prohibit it.

As it stands now, RAID standardization doesn't mean much, anyway. You can't move drives from one brand of RAID controller to another. And the 3ware controllers don't even have true JBOD (oh, wait, there is no such standard ... yeah, that's helpful). I wonder if someone's zeal to be strictly compliant with RAID forced the removal of true JBOD.
 
Old 03-13-2011, 03:19 PM   #7
Skaperen
Senior Member
Original Poster
Quote:
Originally Posted by johnsfine View Post
Why is RAID 5 low performance?

I know about it more in theory than in practice, so maybe I'm missing something. Are you worried about the disk performance or the extra computation?
I've experienced horrible performance with hardware RAID 5. I learned about the logic needed in RAID 5 and figured out the theoretical reduction in performance. Real-life performance was worse, but I never found out why. I no longer use RAID 5 unless there is an extreme need for space and no need for performance (although I have found that the performance issues become less severe as the number of drives goes up).

Quote:
Originally Posted by johnsfine View Post
The extra computation is just for write isn't it? For read you have a clean copy of each sector. Compute power is pretty inexpensive (at the level of extra computations needed for RAID 5). Even for software RAID, I don't think it is inefficient.
I would think so. But I wonder if maybe that is what is slowing down hardware RAID controllers. I'd think they'd have some awesome hardwired CPU instructions to do the parity calculations. Maybe not.

Quote:
Originally Posted by johnsfine View Post
The disk performance for read is similar to RAID 0 (and, like RAID 0, normally better than a single non-RAID drive). Large transfers distribute over 3 drives, getting the same transfer-rate benefit as 3-way RAID 0, and less than RAID 0 (but still some) benefit relative to head-movement time.
Are you speaking of theoretical or actually experienced performance? I'd think RAID 0 would perform quite well, up to the controller's capacity. RAID 1 should perform well for multiple reads, too. RAID 5 does do much better at reading than writing, although I still see RAID 5 performing well below what it theoretically should.

Quote:
Originally Posted by johnsfine View Post
For large writes, you would write one sector to each drive for each two sectors of actual data. That sounds twice as efficient as RAID 1.
Yes. Writing 4MB to RAID 0 should write 2MB to drive A and 2MB to drive B. RAID 1 has to write all the same data to both, so no performance gain there, although reading should gain a lot.

Quote:
Originally Posted by johnsfine View Post
Scattered small writes may be quite a bit uglier. But typical filesystem workloads don't include a large fraction of scattered small writes.
It should be scattered according to the stride size in RAID 0 (and similarly in RAID 5). RAID 1 doesn't have a stride. So if I read 4MB, it can get 2MB from drive A and 2MB from drive B in just about any arrangement (yet in practice I see only one drive active and little more than one drive's worth of performance). Maybe the I/O is getting broken up too much somewhere.

Quote:
Originally Posted by johnsfine View Post
Obscure features that few customers want are extra opportunities to put bugs in the system. Focusing on what most customers want is usually better engineering.
ISTM that 3-way RAID 1 would not be one of those.

Well, I suppose maybe they should focus on getting real life performance up to the theoretical levels, first.

Last edited by Skaperen; 03-13-2011 at 03:21 PM.
 
Old 03-13-2011, 03:43 PM   #8
johnsfine
LQ Guru
Quote:
Originally Posted by Skaperen View Post
I'd think RAID 0 would perform quite well, up to the controller's capacity. RAID 1 should perform well for multiple reads, too.
I've only done significant performance testing for RAID 0 and 1 and only for a specific brand/model of fake RAID in Windows. The RAID 0 was significantly faster than JBOD (though not by as much as it should have been) in both reading and writing. The RAID 1 was slightly slower than JBOD for reading and significantly slower for writing. I expect the algorithms were incredibly stupid. With workloads of mainly read, any sane distribution of the read workload across the two drives should do much better than JBOD. Doing worse than JBOD must have taken a level of bad design I can't quite visualize despite seeing the results.

I have always assumed open source Linux software RAID could not be written with a level of stupidity approaching the above. But in fact I neither reviewed the source code nor benchmarked the performance.

Quote:
Writing 4MB to RAID 0 should write 2MB to drive A and 2MB to drive B. RAID 1 has to write all the same data to both,
I was talking about RAID 5, but you seem to be talking about RAID 0.

Writing 4MB to RAID 5 should write 2MB to each of drives A, B and C, where that 6MB total consists of the 4MB of data plus 2MB of parity.
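That arithmetic follows from the XOR parity RAID 5 uses; a byte-level sketch with toy 4-byte chunks standing in for the 2MB ones (illustrative values, not a real on-disk layout):

```python
# RAID 5 parity is a byte-wise XOR of the data chunks in a stripe.
# On a 3-drive array, each stripe holds 2 data chunks + 1 parity chunk,
# so 4 units of data cost 6 units of writes (2 per drive).
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

chunk_a = b"\x01\x02\x03\x04"            # data chunk on drive A
chunk_b = b"\x10\x20\x30\x40"            # data chunk on drive B
parity  = xor_bytes(chunk_a, chunk_b)    # parity chunk on drive C

# If drive A dies, its chunk is recoverable from B and the parity:
recovered = xor_bytes(chunk_b, parity)
print(recovered == chunk_a)  # True
```

The same XOR recovers any single lost member, which is the "one level of redundancy" being discussed.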

Quote:
Originally Posted by johnsfine View Post
Scattered small writes may be quite a bit uglier.
Quote:
It should be scattered according to the stride size in RAID 0 (similar in RAID 5).
My point was that scattered small in RAID 5 is not similar to RAID 0. In RAID 0, a small write goes to whichever one of drive A or B it happens to land on and that is it.

In RAID 5, a small write to drive A means you must also read the corresponding data from drive B (or the old data and old parity) before writing new parity to drive C. That may be bad enough that, if you expect scattered small writes to be common, you might reject RAID 5.
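Counting the disk operations makes the penalty concrete. A sketch, assuming the classic read-modify-write method for RAID 5 (read old data and old parity, then write new data and new parity):

```python
# I/O cost of a single sub-stripe ("small") write, counting disk operations.
def small_write_ios(level):
    if level == "raid0":
        return 1          # just write the one chunk the data lands on
    if level == "raid5":
        # read old data + read old parity, then write new data + new parity
        return 2 + 2
    raise ValueError(level)

print(small_write_ios("raid0"))  # 1
print(small_write_ios("raid5"))  # 4
```

A 4x I/O cost per small write (plus the rotational wait between the reads and the writes) is why scattered-small-write workloads are the usual reason to avoid RAID 5.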

Last edited by johnsfine; 03-13-2011 at 03:44 PM.
 
Old 03-13-2011, 06:00 PM   #9
Slax-Dude
Member
 
Registered: Mar 2006
Location: Valadares, V.N.Gaia, Portugal
Distribution: Slackware
Posts: 528

First: RAID1 can be used with 'n' drives, not just 2.
Second: You can also use nested levels in RAID... like making a RAID1 out of 2 (or more) RAID1 devices...
Although I personally see no logic in this, it CAN be done
Third: RAID1 has a slight 'read' advantage over RAID5, which will decrease as the number of drives increases.
On the other hand, RAID5 has a 'write' advantage over RAID1, except if there are many small writes (smaller than the RAID5's stripe size).

If you value performance over size, then RAID 1+0 is the best option for you, as it has the best 'read' and 'write' performance of all RAID levels.
The minimum number of drives for RAID 1+0 is 4 (make a RAID0 out of 2 RAID1s, each with 2 drives).
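A sketch of that 4-drive RAID 1+0 layout's chunk-to-drive mapping (drive letters are hypothetical):

```python
# Toy logical-to-physical mapping for a 4-drive RAID 1+0:
# the RAID 0 layer stripes chunks across two mirror pairs,
# the RAID 1 layer writes each chunk to both members of its pair.
def raid10_targets(chunk_index):
    pair = chunk_index % 2                    # RAID 0: alternate between pairs
    return {0: ("A", "B"), 1: ("C", "D")}[pair]  # RAID 1: both mirror members

print(raid10_targets(0))  # ('A', 'B')
print(raid10_targets(1))  # ('C', 'D')
print(raid10_targets(2))  # ('A', 'B')
```

Reads can come from either member of the pair a chunk maps to, which is where the level gets its read performance.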
 
Old 03-15-2011, 01:00 PM   #10
Skaperen
Senior Member
Original Poster
Quote:
Originally Posted by Slax-Dude View Post
First: RAID1 can be used with 'n' drives, not just 2.
I would agree that RAID1 conceptually (theoretically) would/should allow this. It's the implementations that I see not doing so.
 
Old 03-15-2011, 01:07 PM   #11
Slax-Dude
Member
Quote:
Originally Posted by Skaperen View Post
I would agree that RAID1 conceptually (theoretically) would/should allow this. It's the implementations that I see not doing so.
Dude, I just tried it with software RAID.
It works as advertised
 
Old 03-15-2011, 03:05 PM   #12
Skaperen
Senior Member
Original Poster
I've seen 3, or maybe 4 by now, implementations of software RAID. Surely by now 1 or 2 of them have gotten it right? Hardware implementations, though, seem to be the big problem. I did post this in the hardware section.
 
Old 03-16-2011, 04:10 AM   #13
Slax-Dude
Member
Quote:
Originally Posted by Skaperen View Post
I did post this in the hardware section.
You sure did...
...although...
Quote:
Originally Posted by Skaperen View Post
It seems every hardware (and at least the software I tested a few years ago) RAID controller only supports a 2-way mirror.
...you did mention software implementations as well, implying that _none_ followed RAID specs correctly.

I only answered that the RAID1 spec _does_ allow for more than 2 drives, and that software RAID1 on Linux works as advertised, hoping it would help you in your particular case.
If you already knew software RAID1 works with more than 2 drives, then your first post is misleading.

If hardware RAID1 is not implemented correctly in your case, you have 3 choices:
1) talk with the manufacturer of your RAID controller
2) switch manufacturer (or model) of your RAID controller
3) use software RAID, as with RAID1 there is no benefit to using hardware over software RAID

My advice: go with number 3
 
Old 03-16-2011, 07:47 AM   #14
Skaperen
Senior Member
Original Poster
Yeah, I guess I left that impression. But I have only dabbled in software RAID. Until recently, hardware (e.g. IDE) didn't let software parallelize I/O very well. Nowadays, I've seen it do much better, to the point that I see only minimal reduction in I/O to one device when hitting another device. So maybe software RAID is more viable now. But I'm also a "keep it simple" oriented person, so doing LVM just to do RAID is not my style.

I've never seen anything in RAID specs that disallowed more than 2x redundancy in mirroring. But it's not the specs I'm concerned about; it's the implementations in hardware (so technically, not really appropriate for LQ ... just wanting to get a Linux administrator's perspective on this).

1. That's always hard to do.
2. Who has N-way mirroring and all the other stuff?
3. I have in the past seen hardware as the advantage.

I still have concerns with software RAID. One is that I/O drivers are still structured in a way that involves duplicate buffers. Has that been solved? Also, the selection of controllers that are just plain controllers, and have high performance, seems limited.
 
Old 03-16-2011, 01:44 PM   #15
Slax-Dude
Member
Quote:
Originally Posted by Skaperen View Post
But I'm also a "keep it simple" oriented person, so doing LVM just to do RAID is not my style.
Who mentioned LVM?
You don't need it for software RAID.
Quote:
Originally Posted by Skaperen View Post
I still have concerns with software RAID. One is that I/O drivers are still structured in a way that involves duplicate buffers. Has that been solved?
Never heard of it. Got some links?
Quote:
Originally Posted by Skaperen View Post
Also, available controllers that are just plain controllers, and have high performance, seems to be limited.
Care to clarify?

I use software RAID on my servers because it is much more flexible.
From setting up email reports of drive failures, so I can act faster, to complex nested RAID levels... I'm in total control.
I can even make RAID devices out of partitions or block devices located on other servers, if I so desire.

I really don't see the advantage of hardware RAID over software RAID, unless I have an insanely high-end controller.
Even then, I would depend on whatever options the manufacturer 'thinks' should be available....

Last edited by colucix; 03-18-2011 at 10:21 AM. Reason: Restored original post as per user request.
 
  

