Unusual RAID50 / RAID5 performance with md
Not sure if this is going to turn out to be a hardware or a software issue, but since my config is the same for both RAIDs, here goes...
I have a 6-disk RAID50 set up from 2 x 3-disk RAID5s. Basically it looks like this:

    md2:
      -> md0:
         ---> sdb1
         ---> sdc1
         ---> sdd1
      -> md1:
         ---> sde1
         ---> sdf1
         ---> sdg1

However, I don't think it's as fast as it sounds. The drives are 160GB SATA each. I ran hdparm -t on /dev/md0, /dev/md1 and /dev/md2 and, confusingly, got the following numbers:

    md0: 108 MB/s
    md1:  80 MB/s
    md2:  66 MB/s

md2 should be the fast one, as it is the 6-disk RAID50; the other two are only "sub-raids" of it, so I would have expected them to be slower. Does anyone have an opinion on this? And most strangely of all: why would md0 be so much faster than md1? They are both software RAID5! The numbers are repeatable.
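For anyone who wants to reproduce the test, this is roughly what I'm running (just a sketch; the dd sizes are arbitrary values, and iflag=direct needs a reasonably recent GNU dd). hdparm -t only does a short buffered read from the start of the device, so a longer dd read that bypasses the page cache gives a steadier comparison:

    # quick sequential read test on each array
    hdparm -t /dev/md0
    hdparm -t /dev/md1
    hdparm -t /dev/md2

    # steadier alternative: read 2GB sequentially with O_DIRECT so the
    # page cache doesn't inflate the numbers (sizes are just examples;
    # if your dd lacks iflag=direct, read more data than you have RAM)
    dd if=/dev/md0 of=/dev/null bs=1M count=2048 iflag=direct
    dd if=/dev/md1 of=/dev/null bs=1M count=2048 iflag=direct
    dd if=/dev/md2 of=/dev/null bs=1M count=2048 iflag=direct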
Clarifications
Oops, forgot to mention:
This is with kernel 2.6.11-1.1369 (fc4). cat /proc/mdstat shows:

    md2 : active raid0 md0[0] md1[1]
          639804288 blocks 64k chunks

    md1 : active raid5 sdg1[2] sdf1[1] sde1[0]
          319902208 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]

    md0 : active raid5 sdd1[2] sdc1[1] sdb1[0]
          319902208 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]

so that is fine. The controllers are generic $20 SATA cards, and some of the drives are hanging directly off the motherboard. HANG ON! Could that be the answer? Does anyone have any insight as to whether this could be a PCI bottleneck? I think I just answered my own question. However, that still doesn't explain why the stripe set (md2) is so much slower than its individual components (md0 and md1).
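To check the PCI theory, this is roughly what I plan to try (a sketch only; I'm assuming here that sde and sdf sit on the same add-in card, so substitute whichever drives actually share a controller on your box, checking with lspci and the boot messages in dmesg). A plain 32-bit/33MHz PCI bus tops out at about 133 MB/s theoretical, so a couple of drives sharing one bus could easily explain an 80 MB/s ceiling. Reading two drives on the same card at once should show whether they're fighting over the bus:

    # see which controllers the drives hang off and how the PCI buses are laid out
    lspci
    lspci -t

    # read two drives on the same add-in card simultaneously; if the combined
    # rate is no better than a single drive on its own, the shared PCI bus
    # (not the drives or md) is the bottleneck
    hdparm -t /dev/sde &
    hdparm -t /dev/sdf &
    wait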