LinuxQuestions.org
Old 08-02-2017, 06:09 AM   #1
elliot01
Member
 
Registered: Jun 2009
Location: UK
Distribution: CentOS / RedHat
Posts: 89

Rep: Reputation: 16
Bonded NIC showing lower speed than it should


Hi all

Using RHEL 6.x on an IBM x3650 server:
Code:
Linux xx.xxx.local 2.6.32-358.18.1.el6.x86_64 #1 SMP Fri Aug 2 17:04:38 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux
We have eth0 and eth1 teamed to create bond0.
Both eth0 and eth1 are 1 Gb/s interfaces (SNMP ifSpeed 1000000000).

Our monitoring system queries the server over SNMP for NIC usage, but it pulls back the bond0 interface's max speed as 10000000 (10 Mb/s), so it raises alerts whenever usage gets anywhere near that figure, which we don't want.
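As a sanity check on the numbers (plain arithmetic; per the standard IF-MIB, the 32-bit ifSpeed object is expressed in bits per second):

```shell
# ifSpeed is bits/second; convert the values in question to Mb/s.
for bps in 1000000000 100000000 10000000; do
  awk -v b="$bps" 'BEGIN { printf "%s bits/s = %d Mb/s\n", b, b / 1000000 }'
done
# -> 1000000000 bits/s = 1000 Mb/s
# -> 100000000 bits/s = 100 Mb/s
# -> 10000000 bits/s = 10 Mb/s
```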

Some OS output which may be of use (although a lot of it looks inconsistent):

Code:
cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: e4:1f:13:bb:c5:c8
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: e4:1f:13:bb:c5:ca
Slave queue ID: 0
Code:
[root@db2 log]# mii-tool bond0
bond0: 10 Mbit, half duplex, link ok
[root@db2 log]# mii-tool eth0
eth0: negotiated 100baseTx-FD, link ok
[root@db2 log]# mii-tool eth1
eth1: negotiated 100baseTx-FD, link ok
Code:
[root@db2 bond0]# ifconfig
bond0     Link encap:Ethernet  HWaddr E4:1F:13:BB:C5:C8
          inet addr:10.11.200.12  Bcast:10.11.200.255  Mask:255.255.255.0
          inet6 addr: fe80::e61f:13ff:febb:c5c8/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:9000  Metric:1
          RX packets:16619393788 errors:0 dropped:57 overruns:0 frame:0
          TX packets:64443705219 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3667500474001 (3.3 TiB)  TX bytes:86447458634379 (78.6 TiB)

eth0      Link encap:Ethernet  HWaddr E4:1F:13:BB:C5:C8
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
          RX packets:16376601126 errors:0 dropped:57 overruns:0 frame:0
          TX packets:64443705219 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3650885995832 (3.3 TiB)  TX bytes:86447458634379 (78.6 TiB)

eth1      Link encap:Ethernet  HWaddr E4:1F:13:BB:C5:C8
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:9000  Metric:1
          RX packets:242792662 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:16614478169 (15.4 GiB)  TX bytes:0 (0.0 b)
I did find an article in Red Hat's KB called 'Bonding slaves show link speed as 100Mbps for gigabit slave interfaces in /proc/net/bonding', but that issue was apparently fixed in RHEL 5.

Have we misconfigured the bond? I don't know how to ask the OS what speed it thinks the bond interface is running at, so I can't tell whether the fault lies with SNMP, with the OS's reporting, or with the bond configuration itself.
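For anyone checking along: the kernel's own view of a link's speed can be read directly from sysfs (a generic sketch, not specific to this box; note that ethtool, unlike the ageing mii-tool, understands gigabit negotiation, which likely explains the inconsistent mii-tool output above):

```shell
# Ask the kernel for each interface's current speed in Mb/s.
# /sys/class/net/<if>/speed is the same source ethtool's "Speed:" line uses;
# reading it fails (or returns -1) when the link is down or the NIC is absent.
for nic in bond0 eth0 eth1; do
  f="/sys/class/net/$nic/speed"
  if [ -r "$f" ] && spd=$(cat "$f" 2>/dev/null); then
    echo "$nic: ${spd} Mb/s"
  else
    echo "$nic: no speed available"
  fi
done
# Or, per interface:  ethtool eth0 | grep Speed
```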

Can someone point me in the right direction for getting to the bottom of this?

Thank you for your time.

Elliot
 
Old 08-02-2017, 06:17 AM   #2
elliot01
Member
 
Registered: Jun 2009
Location: UK
Distribution: CentOS / RedHat
Posts: 89

Original Poster
Rep: Reputation: 16
A bit of extra information.

I have queried the switch, and both ports the server is connected to are confirmed (via SNMP) to be running at 1000000000 (1 Gb/s) with a matching MTU of 9000.
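The server's side of each link can be cross-checked against those switch readings; a quick sketch using only sysfs (interface names will vary per machine):

```shell
# Print the MTU the kernel has for every present interface, to compare
# against the MTU reported by the switch ports (9000 in this case).
# /sys/class/net/<if>/mtu always exists for an interface that is present.
for d in /sys/class/net/*; do
  nic=$(basename "$d")
  echo "$nic: mtu $(cat "$d/mtu")"
done
```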
 
Old 08-02-2017, 08:01 AM   #3
elliot01
Member
 
Registered: Jun 2009
Location: UK
Distribution: CentOS / RedHat
Posts: 89

Original Poster
Rep: Reputation: 16
After some further investigation, it seems I am on a wild goose chase trying to get bond0 to report a speed of its own: the bond is a virtual interface that just points at its slaves, so in active-backup mode its effective speed is simply that of whichever slave is currently active.

As a workaround, I have simply told the monitoring system to watch eth0 and eth1 instead.
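If anyone does still want a single number for the bond, one approach in active-backup mode is to resolve the currently active slave and read its speed; a sketch using the bonding sysfs files that back the /proc/net/bonding output above:

```shell
# In active-backup mode the bond's effective speed is that of the active slave.
bond=bond0
if [ -r "/sys/class/net/$bond/bonding/active_slave" ]; then
  slave=$(cat "/sys/class/net/$bond/bonding/active_slave")
  echo "$bond active slave: $slave, speed: $(cat "/sys/class/net/$slave/speed") Mb/s"
else
  echo "$bond: no bonding sysfs entry on this host"
fi
```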
 
  

