Linux - Networking
This forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
12-07-2013, 11:10 AM | #1
LQ Newbie | Registered: Aug 2011 | Posts: 15
Aggregator ID in bonding configuration
I have a strange situation with my bonding configuration.
HP DL380 G7 box with 4 onboard NICs.
Red Hat 5.10 as the OS.
All NICs are 1 Gb/s capable.
All 4 NICs are configured into an LACP bond (mode 4, 802.3ad) with some extra settings for the hash algorithm.
The Cisco switch has a complementary port-channel configured.
BUT:
Two Red Hat NICs have one Aggregator ID and the other two NICs have another Aggregator ID. As a result, the port-channel on the switch is not working across all 4 interfaces: only two interfaces are used and total throughput is only 2 Gb/s.
What makes Red Hat choose different Aggregator IDs for the NICs?
Is it possible to force the Aggregator ID in the NIC configuration? How?
Is there something on the switch that needs to be checked?
I have two servers misbehaving this way; many others are OK.
Here are some outputs that may help:
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.4.0-2 (October 7, 2008)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 50
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: slow
Active Aggregator Info:
Aggregator ID: 3
Number of ports: 2
Actor Key: 17
Partner Key: 10
Partner Mac Address: 00:25:83:f8:c0:00
Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 9c:8e:99:fc:6e:d6
Aggregator ID: 3
Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 9c:8e:99:fc:6e:d8
Aggregator ID: 3
Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 9c:8e:99:fc:6e:da
Aggregator ID: 1
Slave Interface: eth3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 9c:8e:99:fc:6e:dc
Aggregator ID: 1
# /sbin/ethtool bond0
Settings for bond0:
Supported ports: [ ]
Supported link modes:
Supports auto-negotiation: No
Advertised link modes: Not reported
Advertised auto-negotiation: No
Speed: 2000Mb/s
Duplex: Full
Port: Unknown! (255)
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
Link detected: yes
# /sbin/ethtool eth0
Settings for eth0:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: g
Wake-on: g
Link detected: yes
# /sbin/ethtool eth1
Settings for eth1:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: g
Wake-on: g
Link detected: yes
# /sbin/ethtool eth2
Settings for eth2:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: g
Wake-on: g
Link detected: yes
# /sbin/ethtool eth3
Settings for eth3:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: g
Wake-on: g
Link detected: yes
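The split is easy to spot by pairing each slave with its Aggregator ID. A minimal sketch, run here against an abridged sample of the /proc output above so it is self-contained; on a live system you would point the same awk at /proc/net/bonding/bond0 directly:

```shell
# Pair each bond slave with its Aggregator ID.
# The guard slave != "" also skips the "Active Aggregator Info"
# Aggregator ID line in the real file, which appears before any slave.
result=$(awk '
  /^Slave Interface:/ { slave = $3 }
  /^Aggregator ID:/ && slave != "" { print slave, $3; slave = "" }
' <<'EOF'
Slave Interface: eth0
Aggregator ID: 3
Slave Interface: eth1
Aggregator ID: 3
Slave Interface: eth2
Aggregator ID: 1
Slave Interface: eth3
Aggregator ID: 1
EOF
)
echo "$result"
# More than one distinct ID means the LAG has split:
echo "$result" | awk '{ print $2 }' | sort -u | wc -l
```

The last command prints 2 here, matching the symptom: eth0/eth1 sit in aggregator 3 while eth2/eth3 sit in aggregator 1, so only one pair carries traffic.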
12-10-2013, 05:32 PM | #2
Senior Member | Registered: Apr 2009 | Posts: 1,890
How did you configure the bond interface? Your bond looks like active-backup, not active-active. My guess: if eth0 and eth1 go down, traffic will fail over to eth2 and eth3, provided the switch configuration is correct.
12-10-2013, 06:15 PM | #3
LQ Newbie (Original Poster) | Registered: Aug 2011 | Posts: 15
No, it is LACP with a hash policy (see the `cat /proc/net/bonding/bond0` output). This is why the Aggregator ID is important: links must have identical parameters to join the same aggregator. The switch configuration also matters, but I do not know what exactly makes the Aggregator IDs differ. It may be that the switch does not support the hash policy I used here; I need to do some testing. I thought someone might already know what exactly can be wrong.
Leonid
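For reference, a mode 4 (802.3ad) bond on RHEL 5 is typically set up through ifcfg files along these lines. This is a sketch, not the poster's actual files: the miimon, lacp_rate and xmit_hash_policy values are taken from the /proc/net/bonding/bond0 output above, while the addresses are assumptions.

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch; addresses assumed)
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
# mode=4 is 802.3ad/LACP; the other options match the /proc output above
BONDING_OPTS="mode=4 miimon=50 lacp_rate=slow xmit_hash_policy=layer3+4"

# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1-eth3)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

Nothing in these options forces a particular Aggregator ID; the bonding driver assigns aggregators from the LACP exchange, which is why a mismatch usually points at the partner (switch) side.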
12-11-2013, 03:31 PM | #4
Senior Member | Registered: Apr 2009 | Posts: 1,890
Maybe you should check the LACP configuration on the Cisco switch. Their LACP has two modes, active and passive.
The hash policy shouldn't cause the issue as long as the traffic has enough sessions.
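The switch side being suggested here would look roughly like this in Cisco IOS; a sketch only, with the interface range and channel-group number assumed rather than taken from the thread:

```text
! Sketch of the Cisco side; interface names and group number assumed
interface range GigabitEthernet1/0/1 - 4
 channel-protocol lacp
 channel-group 10 mode active
```

`mode active` makes the switch initiate LACP negotiation; `mode passive` only responds, and passive on both ends never forms the LAG. All four ports must also carry identical speed, duplex, and VLAN settings to join one aggregator.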
12-13-2013, 02:21 PM | #5
LQ Newbie (Original Poster) | Registered: Aug 2011 | Posts: 15
Yep, it was the Cisco.
As it turned out, in all three cases there was a mismatch in some fine-tuning configuration parameters between two blades or two nodes of the switches. Looking at those more closely together with our network engineers fixed them.
Now all bonds on all servers are working as they should.
Best regards,
Leonid