Quote:
Originally Posted by stefan_nicolau
Code:
dd if=/dev/sda of=/dev/null bs=1M&
iperf -s
gives 76 MB/s on iperf and 30 MB/s on sda, for a total of 106 MB/s on the bus
Code:
dd if=/dev/hda of=/dev/null bs=1M&
dd if=/dev/sda of=/dev/null bs=1M&
iperf -s
gives 57 MB/s on iperf, 13 MB/s on hda and 27 MB/s on sda, for a total of 97 MB/s on the bus. (Performance is the same when reading from a file rather than from the raw device.)
So the bottleneck is not the bus. What I found interesting is that disk performance drops by half under heavy network usage. CPU during the combined iperf/dd runs is 70% sys / 0% idle / 30% wait, and dd alone uses 45% CPU, so that's the bottleneck. Maybe I should first look at lowering CPU usage for disk access (is that even possible? DMA and ACPI are already on.)
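If it helps, this is roughly how I'd confirm DMA is actually in effect and get a no-network baseline for each disk; hdparm is assumed to be installed, and the -d flag only really applies to the IDE drive:
Code:
# check whether DMA is enabled on the IDE disk (hda)
hdparm -d /dev/hda
# cached vs. buffered read timings, no network involved
hdparm -Tt /dev/hda
hdparm -Tt /dev/sda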
But during an NFS operation the CPU is 25% sys / 10% idle / 65% wait and I only get 18 MB/s. Why is I/O wait so high if neither device is at full speed and the CPU is not maxed out?
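One way to see where the wait is going would be to watch per-device utilization while the NFS transfer runs (iostat comes with the sysstat package); something along these lines:
Code:
# extended per-device stats every second: look at %util and await
iostat -x 1
# run/block queue lengths and overall CPU breakdown
vmstat 1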
A thought:
Are there any symlinks on your system drive in the path that leads to the drive holding the NFS export? I ask because, IIRC, the symlink is re-evaluated every time the disk is read or written, which would mean your slow system disk gets hit every time you touch the fast disk.
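A quick way to check, assuming the export lives somewhere like /path/to/export (placeholder path), is to walk the path with namei from util-linux, or just resolve it with readlink:
Code:
# list every component of the exported path; symlinks show up as 'l' entries
namei -l /path/to/export
# fully resolve the path and compare it with the entry in /etc/exports
readlink -f /path/to/export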
Regardless, swapping out that *slow* system drive can only help things. For gits and shiggles, maybe boot Knoppix or an Ubuntu live CD and do the same NFS export to see if taking the system drive out of the equation helps.
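Roughly what that test could look like from the live session; the device, network range, and file names below are placeholders, and it assumes the nfs-kernel-server package is available in the live environment:
Code:
# on the live system: mount the fast disk and export it read-only
mkdir -p /mnt/data && mount /dev/sda1 /mnt/data
echo "/mnt/data 192.168.1.0/24(ro,sync,no_subtree_check)" >> /etc/exports
exportfs -ra
# on the client: mount the export and repeat the read test
mount -t nfs liveserver:/mnt/data /mnt/test
dd if=/mnt/test/somefile of=/dev/null bs=1M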