SCSI device queue_depth effect on performance
I have a system with a few LSI controllers. There are 3 RAIDs in the system, with different numbers and types of drives and different performance characteristics. I noticed that reducing the queue_depth for the SCSI devices from 128 to something really small like 2 or 4 seems to help overall performance when reading from or writing to all of the RAIDs simultaneously.
Performance seems fine with only a couple of RAIDs, but when I add more there seems to be some system resource that causes a bottleneck. When I run into this issue, for some reason reducing the queue_depth from 128 to 2, 3, or 4 for all of the drives alleviates some, if not almost all, of the bottlenecking. First of all this seems rather weird, since a larger queue depth usually helps performance as opposed to hurting it. Can anyone explain this behavior to me? Is there some other queue depth I need to increase in the SCSI or block layer when there are lots of drives and RAIDs? |
Oh, I should also probably note that these are all RAID 0 arrays.
|
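For anyone who wants to experiment with the same settings: both the per-device SCSI queue_depth and the block-layer nr_requests are exposed through sysfs. Below is a rough Python sketch of reading and changing them (the device names are placeholders, it assumes the standard /sys/block layout, and writing requires root):

Code:
#!/usr/bin/env python3
# Rough illustrative sketch: read and adjust the per-device SCSI queue_depth and the
# block-layer nr_requests through sysfs. Device names are placeholders; writing
# requires root. Equivalent to cat/echo against the same sysfs files.
import glob
import os

def show_queue_settings():
    for path in sorted(glob.glob("/sys/block/sd*")):
        dev = os.path.basename(path)
        qd_file = os.path.join(path, "device", "queue_depth")
        nr_file = os.path.join(path, "queue", "nr_requests")
        qd = open(qd_file).read().strip() if os.path.exists(qd_file) else "n/a"
        nr = open(nr_file).read().strip() if os.path.exists(nr_file) else "n/a"
        print(f"{dev}: queue_depth={qd} nr_requests={nr}")

def set_queue_depth(dev, depth):
    # e.g. set_queue_depth("sdb", 4) mirrors: echo 4 > /sys/block/sdb/device/queue_depth
    with open(f"/sys/block/{dev}/device/queue_depth", "w") as f:
        f.write(str(depth))

if __name__ == "__main__":
    show_queue_settings()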
So it looks like reducing the queue_depth of the drives results in more I/O requests being merged, which leads to an increase in overall performance.
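If you want to sanity-check the merging behaviour, the kernel keeps per-device merge counters in /sys/block/<dev>/stat that you can sample while the workload runs. Here is a rough sketch (the device name is a placeholder; the field layout follows the kernel's Documentation/block/stat.rst):

Code:
#!/usr/bin/env python3
# Rough sketch: sample the read/write merge counters in /sys/block/<dev>/stat to see
# whether a lower queue_depth really results in more requests being merged.
# Device name is a placeholder; field layout per Documentation/block/stat.rst.
import sys
import time

def read_merges(dev):
    with open(f"/sys/block/{dev}/stat") as f:
        fields = f.read().split()
    # 1-based fields: 2 = read requests merged, 6 = write requests merged
    return int(fields[1]), int(fields[5])

def watch(dev, interval=5):
    prev_r, prev_w = read_merges(dev)
    while True:
        time.sleep(interval)
        cur_r, cur_w = read_merges(dev)
        print(f"{dev}: +{cur_r - prev_r} read merges, "
              f"+{cur_w - prev_w} write merges in {interval}s")
        prev_r, prev_w = cur_r, cur_w

if __name__ == "__main__":
    watch(sys.argv[1] if len(sys.argv) > 1 else "sda")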
|