Hi all
Using RHEL 6.x on an IBM x3650 server:
Code:
Linux xx.xxx.local 2.6.32-358.18.1.el6.x86_64 #1 SMP Fri Aug 2 17:04:38 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux
We have eth0 and eth1 teamed to create bond0.
Both eth0 and eth1 are 1Gb/s (1000000000) interfaces.
Our monitoring system queries the server over SNMP for NIC usage, but it pulls back the bond0 interface's maximum speed as 10000000 (100Mb/s). As a result, it raises alerts whenever usage gets anywhere near 100Mb/s, which we don't want.
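In case it's relevant, I believe the monitor just reads the standard IF-MIB speed objects; something along these lines should reproduce what it sees (the community string below is a placeholder for ours):
Code:
# ifSpeed is in bit/s, ifHighSpeed is in Mb/s; walk ifDescr first to find bond0's index
snmpwalk -v2c -c public 10.11.200.12 IF-MIB::ifDescr
snmpwalk -v2c -c public 10.11.200.12 IF-MIB::ifSpeed
snmpwalk -v2c -c public 10.11.200.12 IF-MIB::ifHighSpeed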
Here is some output from the OS that may be useful (although a lot of it seems inconsistent):
Code:
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: e4:1f:13:bb:c5:c8
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: e4:1f:13:bb:c5:ca
Slave queue ID: 0
Code:
[root@db2 log]# mii-tool bond0
bond0: 10 Mbit, half duplex, link ok
[root@db2 log]# mii-tool eth0
eth0: negotiated 100baseTx-FD, link ok
[root@db2 log]# mii-tool eth1
eth1: negotiated 100baseTx-FD, link ok
Code:
[root@db2 bond0]# ifconfig
bond0     Link encap:Ethernet  HWaddr E4:1F:13:BB:C5:C8
          inet addr:10.11.200.12  Bcast:10.11.200.255  Mask:255.255.255.0
          inet6 addr: fe80::e61f:13ff:febb:c5c8/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:9000  Metric:1
          RX packets:16619393788 errors:0 dropped:57 overruns:0 frame:0
          TX packets:64443705219 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3667500474001 (3.3 TiB)  TX bytes:86447458634379 (78.6 TiB)

eth0      Link encap:Ethernet  HWaddr E4:1F:13:BB:C5:C8
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
          RX packets:16376601126 errors:0 dropped:57 overruns:0 frame:0
          TX packets:64443705219 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3650885995832 (3.3 TiB)  TX bytes:86447458634379 (78.6 TiB)

eth1      Link encap:Ethernet  HWaddr E4:1F:13:BB:C5:C8
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
          RX packets:242792662 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:16614478169 (15.4 GiB)  TX bytes:0 (0.0 b)
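For completeness, the bond was built the standard RHEL 6 way with ifcfg files; they should look roughly like the following (reconstructed here rather than pasted from the server, but the mode, miimon, address and MTU match the output above):
Code:
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=10.11.200.12
NETMASK=255.255.255.0
MTU=9000
BONDING_OPTS="mode=active-backup miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0 (ifcfg-eth1 differs only in DEVICE)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes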
I did find an article in Red Hat's knowledge base called 'Bonding slaves show link speed as 100Mbps for gigabit slave interfaces in /proc/net/bonding', but that issue was apparently fixed in RHEL 5.
Have we misconfigured the bond? I don't know how to ask the OS authoritatively what speed it thinks the bond interface is running at, so I can't tell whether the fault lies with SNMP, with the OS's reporting, or with the bond configuration itself. The best I can come up with is sketched below, but I don't know whether it is meaningful for a bond master.
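As far as I can tell, mii-tool only reads the basic (pre-gigabit) MII registers, so I'm not sure how much weight its output carries; these are the checks I had in mind:
Code:
# what the kernel believes each device is running at
ethtool bond0 | egrep -i 'speed|duplex'
ethtool eth0  | egrep -i 'speed|duplex'
ethtool eth1  | egrep -i 'speed|duplex'

# the same values via sysfs (reported in Mb/s)
cat /sys/class/net/bond0/speed
cat /sys/class/net/eth0/speed
cat /sys/class/net/eth1/speed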
Can someone point me in the right direction for getting to the bottom of this?
Thank you for your time.
Elliot