Hello all,
This is my first post on this board, so my apologies if it should have been posted in the more distro-specific forums.
I am trying to bond two network ports in an LACP configuration on CentOS 6. The same ports were bonded on CentOS 5 without issue, but now I just cannot get it to work. LACP is configured on the switch.
Basically, I am unable to ping any IP on the network except the server's own. With bonding turned off, however, I can access the network just fine from either of the two bonded ports.
Here are my configuration files:
/etc/sysconfig/network-scripts/ifcfg-bond0:
Code:
DEVICE=bond0
IPADDR=80.x.x.205
NETMASK=255.255.255.0
GATEWAY=80.x.x.1
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="miimon=100 mode=4 lacp_rate=1 xmit_hash_policy=2"
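For reference, the numeric values in BONDING_OPTS correspond to the names used in the kernel's bonding documentation (Documentation/networking/bonding.txt); a quick sketch of the mapping, with helper names that are just my own:

```shell
# Map numeric BONDING_OPTS values to the bonding driver's option names.
# Per the kernel bonding docs: mode 4 = 802.3ad, xmit_hash_policy 2 =
# layer2+3, lacp_rate 1 = fast.
bond_mode_name() {
  case "$1" in
    0) echo balance-rr ;;   1) echo active-backup ;;
    2) echo balance-xor ;;  3) echo broadcast ;;
    4) echo 802.3ad ;;      5) echo balance-tlb ;;
    6) echo balance-alb ;;  *) echo unknown ;;
  esac
}
xmit_hash_name() {
  case "$1" in
    0) echo layer2 ;;  1) echo layer3+4 ;;  2) echo layer2+3 ;;
    *) echo unknown ;;
  esac
}

echo "mode=4 is $(bond_mode_name 4), xmit_hash_policy=2 is $(xmit_hash_name 2)"
```

So the options above ask for 802.3ad (LACP) with fast LACPDUs and layer2+3 transmit hashing, which matches what /proc/net/bonding/bond0 reports further down.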
/etc/sysconfig/network-scripts/ifcfg-em1:
Code:
DEVICE="em1"
ONBOOT=yes
USERCTL=no
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
/etc/sysconfig/network-scripts/ifcfg-em2:
Code:
DEVICE="em2"
ONBOOT=yes
USERCTL=no
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
I checked the routing with "ip route", and it seems okay:
Code:
80.x.x.0/24 dev bond0 proto kernel scope link src 80.x.x.205
169.254.0.0/16 dev bond0 scope link metric 1013
default via 80.x.x.1 dev bond0
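One thing I sanity-checked while staring at this table is that the gateway really sits inside the subnet implied by IPADDR/NETMASK, so the default route is on-link. A small helper I hacked up for that (the helper name is mine, and since the 80.x.x octets above are masked, the values below are just illustrative):

```shell
# same_subnet ADDR GATEWAY NETMASK
# Returns success if ADDR and GATEWAY land in the same network under
# NETMASK. For a contiguous mask octet m, the block size is 256 - m,
# so two octets match when they fall in the same block (avoids relying
# on non-POSIX bitwise operators in awk).
same_subnet() {
  awk -v x="$1" -v y="$2" -v n="$3" '
    BEGIN {
      split(x, a, "."); split(y, b, "."); split(n, m, ".")
      ok = 1
      for (i = 1; i <= 4; i++) {
        blk = 256 - m[i]
        if (int(a[i] / blk) != int(b[i] / blk)) ok = 0
      }
      exit !ok
    }'
}

# Illustrative stand-ins for the masked addresses in the post:
same_subnet 80.1.2.205 80.1.2.1 255.255.255.0 && echo "gateway is on-link"
```

In my case that checks out, so the routing itself doesn't look like the culprit.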
'cat /proc/net/bonding/bond0' returns the following:
Code:
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: fast
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
Aggregator ID: 20
Number of ports: 2
Actor Key: 5
Partner Key: 1001
Partner Mac Address: 5c:26:0a:da:9a:5b
Slave Interface: em1
MII Status: up
Speed: 10 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: d4:be:d9:b2:8e:71
Aggregator ID: 20
Slave queue ID: 0
Slave Interface: em2
MII Status: up
Speed: 10 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: d4:be:d9:b2:8e:73
Aggregator ID: 20
Slave queue ID: 0
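Since a classic failure mode is slaves ending up in different aggregators, I also checked that both slaves report the same Aggregator ID as the active one. A rough helper that automates that check over the /proc output (helper name is mine; a trimmed sample mirroring the IDs above is piped in so the logic can be followed):

```shell
# check_same_agg: read /proc/net/bonding-style output on stdin and warn
# about any slave whose Aggregator ID differs from the active aggregator.
# On a live system: check_same_agg < /proc/net/bonding/bond0
check_same_agg() {
  awk '
    /Active Aggregator Info/                 { in_active = 1 }
    in_active && /Aggregator ID:/ && !active { active = $NF; next }
    /Slave Interface:/                       { slave = $NF }
    slave != "" && /Aggregator ID:/ {
      if ($NF != active) {
        printf "%s: aggregator %s != active %s\n", slave, $NF, active
        bad = 1
      }
      slave = ""
    }
    END { exit bad }
  '
}

# Trimmed sample with the same Aggregator IDs as the post:
printf '%s\n' 'Active Aggregator Info:' 'Aggregator ID: 20' \
  'Slave Interface: em1' 'Aggregator ID: 20' \
  'Slave Interface: em2' 'Aggregator ID: 20' | check_same_agg \
  && echo 'all slaves are in the active aggregator'
```

Both em1 and em2 are in aggregator 20, the active one, so on the server side the LAG at least looks assembled.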
And 'ifconfig -a' returns the following:
Code:
bond0 Link encap:Ethernet HWaddr D4:BE:D9:B2:8E:71
inet addr:80.x.x.205 Bcast:80.x.x.255 Mask:255.255.255.0
inet6 addr: fe80::d6be:d9ff:feb2:8e71/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:396 errors:0 dropped:0 overruns:0 frame:0
TX packets:26 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:45588 (44.5 KiB) TX bytes:2860 (2.7 KiB)
em1 Link encap:Ethernet HWaddr D4:BE:D9:B2:8E:71
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:160 errors:0 dropped:0 overruns:0 frame:0
TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:20180 (19.7 KiB) TX bytes:1456 (1.4 KiB)
em2 Link encap:Ethernet HWaddr D4:BE:D9:B2:8E:71
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:236 errors:0 dropped:0 overruns:0 frame:0
TX packets:13 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:25408 (24.8 KiB) TX bytes:1404 (1.3 KiB)
I also noticed that if I set up two CentOS 6.3 servers identically with the above bonding configuration, the two servers can communicate with each other just fine, but neither can reach any other network destination.
I have never seen anything like this, and it all seems very strange. The NICs are Broadcom NetXtreme II 5709s, and I have installed the latest drivers from the manufacturer.
Any help is greatly appreciated!