LinuxQuestions.org (/questions/)
-   Linux - Software (https://www.linuxquestions.org/questions/linux-software-2/)
-   -   RAID 5 with even number of drives gives bad write performance. Why? (https://www.linuxquestions.org/questions/linux-software-2/raid-5-with-even-number-of-drives-gives-bad-write-performance-why-840866/)

dbrazeau 10-27-2010 08:42 PM

RAID 5 with even number of drives gives bad write performance. Why?
 
So I have been doing some RAID 5 performance testing and am getting some bad write performance when configuring the RAID with an even number of drives. I'm running kernel 2.6.30 with software-based RAID 5. Here are my performance results:

3 drives: 173 MB/s
4 drives: 123 MB/s
5 drives: 205 MB/s
6 drives: 116 MB/s

This seems rather odd and doesn't make much sense to me. For RAID 0 my performance consistently increases as I add more drives, but this is not the case for RAID 5.

Does anyone know why I might be seeing lower performance when constructing my RAID 5 with 4 or 6 drives rather than 3 or 5?

neonsignal 10-27-2010 10:27 PM

The performance on writes to RAID 5 is bottlenecked by the write to the parity.

How it behaves with different numbers of drives will depend on the write block size. With large blocks (a multiple of a full stripe across all the drives), every write hits all the drives equally. But if the full stripe width doesn't divide evenly into the block size, each write leaves a partial stripe: some drives are written less than others, and updating the parity for a partial stripe forces a read-modify-write cycle, which decreases performance.

So for large write blocks, drive counts such as 3, 5, 9, 17, etc. (2^n + 1) will work better than others: with one chunk per stripe going to parity, the data portion of each stripe spans 2^n chunks, so power-of-two block sizes tile the stripes evenly.
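The tiling argument above can be sketched numerically. This is a rough model, not mdadm's actual logic; the 512 KiB chunk size is an assumption (it matches the chunk size mentioned later in the thread), and "aligned" here just means that writes of the given size tile whole stripes with no partial-stripe remainder:

```python
CHUNK = 512 * 1024  # assumed per-drive chunk size, in bytes

def full_stripe_bytes(drives: int) -> int:
    """Data capacity of one stripe: one chunk per drive, minus one parity chunk."""
    return (drives - 1) * CHUNK

def tiles_evenly(drives: int, write_size: int) -> bool:
    """True if consecutive writes of this size cover whole stripes exactly,
    avoiding partial-stripe read-modify-write of the parity."""
    stripe = full_stripe_bytes(drives)
    return write_size % stripe == 0 or stripe % write_size == 0

one_mib = 1024 * 1024
for drives in (3, 4, 5, 6, 9):
    print(drives, full_stripe_bytes(drives) // 1024, tiles_evenly(drives, one_mib))
```

With 1 MiB writes, this model flags 3, 5, and 9 drives as aligned and 4 and 6 as misaligned, which matches the pattern in the measurements above.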

For small write blocks (e.g. writing to only a single drive plus parity), the number of drives will not matter. This will not improve best-case performance for sequential writes (since the parity is still the bottleneck), but it will help for random write access patterns (where the parity disk will often differ between writes).

dbrazeau 10-28-2010 11:37 AM

Thanks for the great response neonsignal. So I took your suggestion and used a 1.5MB (3 * 512K) test block size for a 4-drive RAID 5, and performance went up to 205 MB/s. Previously I was using a 1MB (2 * 512K) test block size, which is not a multiple of the full stripe width on a RAID 5 with an even number of drives.
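The arithmetic behind that fix can be checked in a few lines. This just restates the numbers from the post (4 drives, 512 KiB chunk): the full data stripe is three chunks, so a 1 MiB write leaves a partial stripe while a 1.5 MiB write fills stripes exactly:

```python
chunk_kib = 512
drives = 4

# One chunk per stripe holds parity, so the data width is (drives - 1) chunks.
full_stripe_kib = (drives - 1) * chunk_kib
print(full_stripe_kib)           # 1536 KiB = 1.5 MiB

print(1024 % full_stripe_kib)    # 1 MiB write: 1024 KiB partial-stripe remainder
print(1536 % full_stripe_kib)    # 1.5 MiB write: 0, whole-stripe aligned
```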
