I am puzzled by the following. I have installed Linux on two identical servers (AMD Opteron on an ASUS motherboard, Intel 82574 LAN-on-board, Scientific Linux 5.5 (RH rebuild)) and configured them with the KVM hypervisor and two bridges: one attached to the physical eth0, the other the default libvirt bridge doing NAT. The only iptables rules are those defined by libvirt. SELinux is disabled. The network drivers are the latest from Intel.
I observe that my networking latency is subjectively high. Worse, it differs between the two hosts: on one server a ping to 127.0.0.1 gives 0.052 ms, on the other 0.032 ms. Both values seem high, since on another, Intel-based server in my organization I observe 0.006 ms or less.
Ping from one server to the other over the same switch gives about 0.240 ms and is not stable, fluctuating up and down by 0.030 ms, while pings between other servers give a stable 0.080 ms.
Also, pinging a guest VM from one host gives 0.250 ms, from the other 0.500 ms; again roughly a factor of two, and this totally ruins the VMs' network performance. Windows guests can only reach about 500 Mbit/s host-to-guest throughput.
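To put numbers on the jitter rather than eyeballing ping output, I've been summarizing RTTs with a small awk pipeline like the one below (a sketch; the 10.0.0.2 address and the three sample lines are canned for illustration, not my real measurements):

```shell
# Compute mean and standard deviation of ping RTTs (ms).
# In real use, feed it live output:  ping -c 100 otherhost | awk ...
# The three sample lines below are canned examples, not real data.
out=$(printf '%s\n' \
  '64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms' \
  '64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.270 ms' \
  '64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.215 ms' |
  awk -F'time=' '/time=/ {
      # a[1] is the RTT in ms; accumulate count, sum, and sum of squares
      split($2, a, " "); n++; sum += a[1]; sumsq += a[1] * a[1]
  } END {
      mean = sum / n
      printf "n=%d mean=%.3f ms stddev=%.3f ms", n, mean, sqrt(sumsq / n - mean * mean)
  }')
echo "$out"
```

On the problem hosts the stddev is a sizable fraction of the mean, which is what I mean by "not stable" above.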
What could be the source of such high latency on the hosts? What debugging tools could experts suggest? Any tricks to improve latency? Could this be related to AMD CPU/chipset support? Something to do with the Linux bridge? And what could cause an almost two-fold difference between identical hosts?
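One suspect I would like to rule out (my assumption, not something I have confirmed) is CPU frequency scaling on the Opterons, since an "ondemand" or "powersave" governor can add wakeup latency that shows up even on pings to 127.0.0.1. This is the quick per-core check I run on each host:

```shell
# Print the CPU frequency governor for every core. The cpufreq sysfs
# tree may be absent (e.g. inside a VM or with scaling disabled), in
# which case nothing is listed.
for f in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    [ -r "$f" ] && printf '%s: %s\n' "$f" "$(cat "$f")"
done
status="governor check done"
echo "$status"
```

If the governors differ between the two hosts, that alone could explain part of the two-fold gap.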
Here is my addition to the sysctl.conf file on both servers. Nothing has helped.
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.optmem_max = 524278
net.core.netdev_max_backlog = 250000
net.ipv4.tcp_rmem = 4096 262144 16777216
net.ipv4.tcp_wmem = 4096 262144 16777216
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 1
net.ipv4.tcp_low_latency = 0
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.route.flush = 1
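For reference, I load this fragment with `sysctl -p` after editing. When bisecting which key matters, I translate lines into one-shot `sysctl -w` commands; the sketch below does that as a dry run (printed, not executed; only two of the keys shown):

```shell
# Dry run: turn sysctl.conf-style "key = value" lines into explicit
# "sysctl -w key=value" commands, printed rather than executed, so each
# key can be toggled and re-checked one at a time on a running host.
cmds=$(printf '%s\n' \
  'net.core.rmem_max = 16777216' \
  'net.ipv4.tcp_timestamps = 0' |
  sed 's/ *= */=/; s/^/sysctl -w /')
echo "$cmds"
```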
Thanks a lot!