Forums > Linux Forums > Linux - Software
Old 10-27-2010, 08:42 PM   #1
Registered: Aug 2009
Distribution: Fedora, OpenSuse, DENX Embedded Linux
Posts: 184

RAID 5 with even number of drives gives bad write performance. Why?

So I have been doing some RAID 5 performance testing and am seeing poor write performance whenever I configure the RAID with an even number of drives. I'm running kernel 2.6.30 with software-based RAID 5. Here are my performance results:

3 drives: 173 MB/s
4 drives: 123 MB/s
5 drives: 205 MB/s
6 drives: 116 MB/s

This seems rather odd and doesn't make much sense to me. With RAID 0 my performance consistently increases as I add more drives, but that's not the case with RAID 5.

Does anyone know why I might be seeing lower performance when constructing my RAID 5 with 4 or 6 drives rather than 3 or 5?
Old 10-27-2010, 10:27 PM   #2
Senior Member
Registered: Jan 2005
Location: Melbourne, Australia
Distribution: Debian Buster (Fluxbox WM)
Posts: 1,390
Blog Entries: 52

RAID 5 write performance is bottlenecked by the parity update: every write must also update the parity chunk of each stripe it touches.

How it behaves with different numbers of drives will depend on the write block size. With large blocks (a multiple of a full stripe across all the drives), each write hits all the drives equally. But if the stripe size doesn't divide evenly into the block size, then on each write some of the drives will be written less than others, which will decrease performance.

So for large write blocks, drive counts such as 3, 5, 9, 17, etc (2^n + 1) will work better than others, because they leave 2^n data drives per stripe, so a power-of-two block size is an exact multiple of the stripe width.

For small write blocks (eg writing to only a single drive + parity), the number of drives will not matter. This will not improve best-case performance for sequential writes (since the parity update is still the bottleneck), but it will help for random write access patterns, since RAID 5 distributes parity and the parity chunk lands on a different drive for different stripes.
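To make the arithmetic concrete, here is a small Python sketch (assuming the 512K chunk size used elsewhere in this thread) that checks whether a test block size lines up with the stripe width:

```python
CHUNK = 512 * 1024  # assumed chunk size (this thread uses 512K chunks)

def stripe_bytes(n_drives, chunk=CHUNK):
    # One drive's worth of each stripe holds parity, so a full stripe
    # carries (n_drives - 1) data chunks.
    return chunk * (n_drives - 1)

def block_aligns(n_drives, block, chunk=CHUNK):
    """A test block size cooperates with the stripe when whole blocks
    tile whole stripes: either each write covers complete stripes, or
    a fixed number of writes exactly fills one stripe."""
    stripe = stripe_bytes(n_drives, chunk)
    return block % stripe == 0 or stripe % block == 0

MB = 1024 * 1024
for n in (3, 4, 5, 6):
    status = "aligned" if block_aligns(n, 1 * MB) else "misaligned"
    print(f"{n} drives, 1MB blocks: {status}")
```

With a 1MB block size this reports 3 and 5 drives as aligned and 4 and 6 as misaligned, which matches the pattern in the benchmark numbers above.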

Last edited by neonsignal; 10-27-2010 at 10:29 PM.
Old 10-28-2010, 11:37 AM   #3
Registered: Aug 2009
Distribution: Fedora, OpenSuse, DENX Embedded Linux
Posts: 184

Original Poster
Thanks for the great response neonsignal. I took your suggestion and used a 1.5MB (3 * 512K) test block size on the 4-drive RAID 5, and performance went up to 205 MB/s. Previously I was using a 1MB (2 * 512K) test block size, which is not a multiple of the 1.5MB stripe width that a 4-drive RAID 5 gives you (3 data chunks * 512K).
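For anyone else hitting this, a quick way to pick a good test block size is to compute the stripe width, one chunk per data drive (a sketch, assuming the 512K chunk size used above):

```python
CHUNK = 512 * 1024  # 512K chunk size, as in the tests above

def full_stripe(n_drives, chunk=CHUNK):
    # RAID 5 spends one chunk per stripe on parity, so the data
    # capacity of a full stripe is (n_drives - 1) chunks.
    return chunk * (n_drives - 1)

for n in (3, 4, 5, 6):
    print(f"{n} drives: use a block size that is a multiple of "
          f"{full_stripe(n) // 1024}K")
```

For 4 drives this gives 1536K (the 1.5MB that fixed the numbers above), and for 6 drives it gives 2560K.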

