MD RAID-5 and RAID-10 are both 1/2 the speed of RAID-0
Doing large reads only, with a variety of MD chunk sizes and read block sizes, I continue to see:
An N-disk RAID-0 array gets about 85% of N times a single component disk's speed. That's not great, but some overhead is expected, so okay so far.
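The throughput figures here come from plain large sequential reads. A minimal sketch of that kind of run (not the exact commands used; the target would be the md device, e.g. /dev/md0, but a scratch file stands in so the sketch is self-contained):

```shell
# Sequential-read throughput sketch. TARGET is a placeholder; on the
# real system it would be the md array device such as /dev/md0.
TARGET=raid_read_test.img
BS=1M        # read block size -- vary this alongside the md chunk size
COUNT=64

# Create a throwaway file so the sketch runs anywhere.
dd if=/dev/zero of="$TARGET" bs=$BS count=$COUNT 2>/dev/null

# On a real array, drop the page cache first so the disks, not RAM,
# are measured:
#   sync; echo 3 > /proc/sys/vm/drop_caches

# GNU dd reports "<bytes> bytes ... copied, <seconds> s, <rate>"
# on its final stderr line.
RESULT=$(dd if="$TARGET" of=/dev/null bs=$BS count=$COUNT 2>&1 | tail -1)
echo "$RESULT"
rm -f "$TARGET"
```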
A 2-disk RAID-1 array gets pretty much the same read performance as its components, which is also fine.
But a 7-disk RAID-5 array gets only 50% of the read bandwidth of a 6-disk RAID-0 array. Is it verifying the XOR parity even when the disks return no errors? Everything is active, with no recovery going on.
And an 8-element RAID-10 array (8 RAID-1 mirrors striped into a RAID-0 array) also gets only about half the performance of a plain RAID-0 stripe built from the same mirror halves. The RAID-10 is composed of mirrors on the same set of SSDs (e.g. the mirror pairs are sda1+sdb2, sdb1+sdc2, ..., sdh1+sda2). Is it doing some sort of locking, so that when it reads from the first mirror pair (the first chunk of the RAID-10 stripe) it locks the other mirror half? sdb would then be locked, causing the concurrently issued read on sdb1 (the next chunk) to be serialized.
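For concreteness, the mirror-pair layout just described would have been built roughly like this (the array names /dev/md1../dev/md9 are assumptions, not from the post; only the partition pairing is as described):

```shell
# Each mirror pairs partition 1 of one disk with partition 2 of the
# next disk, wrapping around at the end:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc2
#   ... md3 through md7 follow the same pattern ...
mdadm --create /dev/md8 --level=1 --raid-devices=2 /dev/sdh1 /dev/sda2

# Stripe the eight mirrors together into the RAID-10:
mdadm --create /dev/md9 --level=0 --raid-devices=8 /dev/md[1-8]
```

Note that each physical disk serves two different mirrors, so adjacent chunks of the stripe land on overlapping spindles.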
If I test each mirror pair individually, each gets the same ~350 MB/s. If I pick two pairs that share a drive (e.g. sda1+sdb2 and sdb1+sdc2) and read from both concurrently, they also run as expected, at about 350 MB/s each. But if I run all 8 concurrently, 4 run at 350 MB/s and 4 run at 170 MB/s. That is likely why the RAID-10 is slow, but how is this concurrency slowing things down if it's not some sort of shared-drive serialization?
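The all-pairs-concurrently test can be sketched as below. On the real box the loop would read the eight mirror devices; scratch files stand in here so the sketch runs anywhere, and the file names are placeholders:

```shell
# Create one scratch file per "mirror pair" (placeholders for the
# real md devices).
FILES=""
for i in 1 2 3 4; do
    f="pair$i.img"
    dd if=/dev/zero of="$f" bs=1M count=8 2>/dev/null
    FILES="$FILES $f"
done

# Start one sequential reader per device at the same moment, then wait.
for f in $FILES; do
    dd if="$f" of=/dev/null bs=1M 2>"$f.log" &
done
wait

# The last stderr line of each dd carries that stream's throughput;
# on the real arrays, compare these per-stream rates.
RESULTS=$(for f in $FILES; do tail -1 "$f.log"; done)
echo "$RESULTS"
rm -f $FILES pair*.log
```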
This is all on RHEL 5.5, but I don't think md is too different across distributions.
I would look past the OS and start thinking about the controller settings. For speed on any RAID array, set the read/write policy to Write Through and Adaptive Read Ahead. You may also want to think about how caching behaves in case of a crash... what hardware are you running on?
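If the arrays sit on plain disks with no hardware controller, the md-side knobs worth checking are the device read-ahead and, for RAID-5/6, the stripe cache. A sketch, with the device name and values purely illustrative, not known-good settings:

```shell
# Read-ahead on the array device, in 512-byte sectors:
blockdev --getra /dev/md0          # show the current setting
blockdev --setra 4096 /dev/md0     # try a larger value for big sequential reads

# RAID-5/6 only: the stripe cache used for parity handling
# (this sysfs file does not exist for RAID-0/1/10 arrays):
cat /sys/block/md0/md/stripe_cache_size
echo 4096 > /sys/block/md0/md/stripe_cache_size
```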