Thanks for your reply, Electro. We were previously using RAID 6, but when hosting around 1000-ish sites you really start to notice the I/O wait lag. Using the 'sar' command provided by the 'sysstat' RPM package, we're seeing the 10-minute %iowait shoot up to 30% or 40% at times. So we're switching to RAID 10. According to dbench and bonnie++, throughput is higher and latency is much lower with RAID 10 on our hardware (IBM x3650).
When we set up a new machine with RAID 10, I took the time to run dbench tests with and without the stride and stripe-width settings. The system is hardware RAID 10, 6 disks, 256 KB stripe size, and a 4096-byte block size. According to
http://uclibc.org/~aldot/mkfs_stride.html , we should be using stride=64 and stripe-width=192. I ran dbench 10 times in a row and saved the output into text files (once with stride/stripe-width set and once without, for a total of 20 runs). The exact command was: dbench 10 -t 60 -c /usr/local/share/client.txt . I then wrote a Perl script to find the minimum, average, and maximum of the throughput and latency across all of the 'execute' lines in each dbench run. The results were:
Code:
RAID10 and noatime turned on:
Minimum Average Maximum
Throughput: 100.26 174.163508474576 222.91 (MB/sec)
Latency: 46.232 231.828379661017 1176.877 (ms)
Code:
RAID10 and noatime turned on and stride / stripe-width set:
Minimum Average Maximum
Throughput: 77.13 158.045508474576 194.74 (MB/sec)
Latency: 39.927 251.216361016949 1432.605 (ms)
So it appears to have made very little difference. In fact, it appears to have made things slightly worse, which is why I wonder whether stride / stripe-width does anything at all on ext3. I was asking about LVM because I was wondering whether using LVM has any effect on the stride / stripe-width calculations, since LVM creates various data structures of its own that consume disk space, and those might introduce some sort of offset that needs to be taken into account when doing the stride / stripe-width calculations.
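For reference, here's the arithmetic behind those two numbers as I understand the uclibc page; the "3 data disks" figure is my assumption from treating a 6-disk RAID 10 as 3 mirrored pairs, and the device name in the comment is made up:

```shell
# stride       = RAID chunk (stripe) size / filesystem block size
# stripe-width = stride * number of data-bearing disks
chunk_kb=256      # controller stripe size in KB
block_kb=4        # ext3 block size in KB (4096 bytes)
data_disks=3      # assumption: 6-disk RAID 10 = 3 mirrored pairs
stride=$((chunk_kb / block_kb))
stripe_width=$((stride * data_disks))
echo "stride=$stride stripe-width=$stripe_width"
# which would be passed at mkfs time as something like:
# mke2fs -j -b 4096 -E stride=64,stripe-width=192 /dev/sdX1
```

If LVM does introduce an offset, it would presumably shift where stripes start on disk rather than change this arithmetic, but I haven't confirmed that.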
And you're saying: "From reading some sites, 'dumpe2fs -h' does in fact provide information. You should pipe it through grep." Do you happen to have links to those sites handy? I have run "dumpe2fs -h", "dumpe4fs -h", and "tune4fs -l", piping each through "grep -i stride" and "grep -i stripe", and didn't get any results.
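For what it's worth, my understanding is that recent e2fsprogs store stride/stripe-width in the superblock and "dumpe2fs -h" labels them "RAID stride" / "RAID stripe width", so a check might look like the sketch below (device path is made up). If the e2fsprogs that created the filesystem was too old to record the values, that would explain the empty grep output:

```shell
# Hypothetical device; needs root. On e2fsprogs new enough to store
# these fields, the superblock header dump should include lines like:
#   RAID stride:              64
#   RAID stripe width:        192
dumpe2fs -h /dev/sdX1 2>/dev/null | grep -iE 'stride|stripe'
```

If the values were simply never stored, I believe newer tune2fs can set them after the fact via its -E extended options, though I'd double-check the man page for the exact option names on your version.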