Software RAID 6 - poor read performance / fast write performance
Hi everyone,
I am currently in the process of upgrading my home fileserver from 8x 500GB Samsung T166 drives to 5x Samsung EcoGreen F4 2TB drives.
After some reading I decided to go with RAID 6 this time (the old array is RAID 5) and to use Ubuntu Server 10.10 (I had been running Ubuntu Server 7.04 until now and it worked very well).
My problem now is that the read performance of the new array is very poor.
Here are some numbers from testing the individual drives and the arrays with hdparm and dd (the exact commands are below the list):
old RAID 5 array: 190 MB/s (the old array used to be faster, but to accommodate all 15 drives I had to borrow a PCI SATA controller, which slows things down a bit)
single old 500GB drive: 80-90 MB/s
new RAID 6 array: 70-80 MB/s
single new 2TB drive: 130 MB/s
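For reference, these are roughly the commands behind the numbers above (a sketch; /dev/md0 and the drive names stand in for my actual devices):

    # timed sequential device reads, no filesystem involved
    hdparm -t /dev/md0
    # sequential read with direct I/O, bypassing the page cache
    dd if=/dev/md0 of=/dev/null bs=1M count=4096 iflag=direct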
So the new array reads much slower than a single drive, which is not at all what I expected. At the moment there are only 4 drives in the array, as one was dead on arrival.
First I thought the CPU might somehow be the bottleneck, as RAID 6 requires a bit more processing power, but that should mainly affect writes.
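A quick way to sanity-check that, assuming the array is /dev/md0: the kernel benchmarks its RAID 6 parity algorithms when the raid6 module loads, and the array's kernel thread shows up in top while reading:

    # parity algorithm benchmark printed at module load time
    dmesg | grep -i raid6
    # watch CPU usage of the md0_raid6 kernel thread during a read test
    top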
While write performance was poor at first too, it increased dramatically to ~180 MB/s after I raised the stripe_cache_size from the default 256 to 8192.
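For reference, that is just the following (md0 assumed; the value does not survive a reboot, so I also added the line to /etc/rc.local):

    # raise the raid5/6 stripe cache from the default of 256 pages;
    # memory cost is roughly stripe_cache_size * 4 KiB * number of drives
    echo 8192 > /sys/block/md0/md/stripe_cache_size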
For the new array I created a partition on each drive with fdisk, starting at sector 2048 to account for the 4K sector alignment and leaving ~100MB unused at the end to be on the safe side should a future replacement drive be a few blocks short.
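I did the partitioning interactively in fdisk; a scriptable equivalent with parted would look roughly like this (per drive, /dev/sdb as an example):

    # start at 1MiB (= sector 2048) for 4K alignment, stop ~100MB short of the end
    parted -s -a optimal /dev/sdb mklabel msdos mkpart primary 1MiB -100MB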
Then I created the array from these partitions with a chunk size of 128k and metadata version 1.2 (which is not the default with the outdated mdadm version that ships with Ubuntu 10.10, so I specified it explicitly).
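The create command was roughly the following (device names assumed; the fifth slot is marked missing until the replacement for the dead drive arrives):

    mdadm --create /dev/md0 --level=6 --raid-devices=5 \
        --chunk=128 --metadata=1.2 \
        /dev/sd[bcde]1 missing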
I use ext4 on the new array and ext3 on the old one, with the appropriate stride and stripe-width values.
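With a 128k chunk and 4k filesystem blocks that works out to stride = 128/4 = 32 and, with 3 data disks in a 5-drive RAID 6, stripe-width = 3 * 32 = 96, i.e. roughly:

    # stride = chunk size / block size; stripe-width = stride * data disks
    mkfs.ext4 -b 4096 -E stride=32,stripe-width=96 /dev/md0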
Does anyone have an idea what could cause the poor read performance of the new array?
Are there any known problems with the md RAID 6 implementation?
Best regards and thanks in advance,
Kvothe