scsi device queue_depth effect on performance
I have a system with a few LSI controllers. There are 3 RAIDs in the system with different numbers and types of drives, each with different performance characteristics. I noticed that reducing the queue_depth for the SCSI devices from 128 to something really small like 2 or 4 seems to help overall performance when reading from or writing to all of the RAIDs simultaneously.
Performance seems fine with only a couple of RAIDs, but when I add more, some system resource seems to become a bottleneck. When I run into this issue, for some reason reducing the queue_depth from 128 to 2, 3, or 4 for all of the drives alleviates some, if not almost all, of the bottlenecking.
First of all, this seems rather weird, since a larger queue depth usually helps performance as opposed to hurting it.
Can anyone explain this behavior to me?
Is there some other queue depth I need to increase in the scsi or block layer when having lots of drives and RAIDs?
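For reference, here is a sketch of the sysfs knobs involved. The device names (sda, sdb, sdc) are placeholders for whatever your RAID volumes show up as; note that the block layer's nr_requests is a separate queue sitting above the per-device SCSI queue_depth, so it is another candidate to experiment with:

```shell
# Inspect per-device and block-layer queue settings.
# Device names below are examples -- substitute your own.

for dev in sda sdb sdc; do
    # Per-device SCSI command queue depth (the value lowered to 2-4 above)
    cat /sys/block/$dev/device/queue_depth

    # Block-layer request queue size (typically defaults to 128);
    # this queue feeds the SCSI layer and can also be tuned
    cat /sys/block/$dev/queue/nr_requests

    # The I/O scheduler in use; schedulers like deadline or noop are
    # often suggested over cfq when hardware RAID does its own ordering
    cat /sys/block/$dev/queue/scheduler
done

# Example: lower the SCSI queue depth for a single device (as root)
echo 4 > /sys/block/sda/device/queue_depth
```

These paths are standard Linux sysfs attributes, but whether queue_depth is writable can depend on the HBA driver.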
Last edited by dbrazeau; 11-06-2012 at 03:53 PM.