Unable to ping/ssh/connect channel bonding from different VLAN
Linux - Networking: This forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
Hi there LQ peeps,
I just set up a new LAMP server (CentOS 5.5 x86_64) with channel bonding on a NetXtreme II BCM5709 Gigabit Ethernet adapter (IBM x3650 M3). The problem is that I am unable to connect to this server from a different VLAN, and the server cannot ping hosts in other VLANs either. Everything works fine when I stay within the same VLAN.
OK, there is no complete picture here at all. You've provided a VLAN id, yet there is no mention of VLANs anywhere in the rest of the config. What's that about? VLANs appear to be utterly irrelevant here, and it's just a bonding issue. Bonding itself also looks irrelevant if you've got mode 1 working fine. So basic troubleshooting for reaching a different subnet: 1) can you ping the default gateway, and does it show up in your ARP cache? 2) what does your routing table actually say (route -n)? It may also be worth showing us a full ifconfig output as well as "cat /proc/net/bonding/bond0". Something in the bonding setup is going to be wrong (unless you pasted the wrong info), as both eth config files say they are eth0...
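For reference, the checks suggested above can be run in sequence like this (the gateway address 10.238.9.1 and interface name bond0 are taken from later in this thread; substitute your own):

```shell
# 1) Can you reach the default gateway at all?
ping -c 3 10.238.9.1

# ...and did it land in the ARP cache?
arp -n | grep 10.238.9.1

# 2) What does the kernel routing table actually say?
route -n

# Full interface details for every NIC, including the bond
ifconfig -a

# Bonding driver status: mode, active slave, MII state, slave MACs
cat /proc/net/bonding/bond0
```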
Hi Chris,
Sorry for the incomplete info. Anyway, here is some additional info.
[root@webrwm02 ~]# ping 10.238.9.1
PING 10.238.9.1 (10.238.9.1) 56(84) bytes of data.
64 bytes from 10.238.9.1: icmp_seq=1 ttl=255 time=2.18 ms
64 bytes from 10.238.9.1: icmp_seq=2 ttl=255 time=1.62 ms
64 bytes from 10.238.9.1: icmp_seq=3 ttl=255 time=5.31 ms
--- 10.238.9.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 1.628/3.043/5.313/1.621 ms
[root@webrwm02 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
10.238.9.0 0.0.0.0 255.255.255.0 U 0 0 0 bond0
10.238.9.0 0.0.0.0 255.255.255.0 U 1 0 0 eth1
10.238.9.0 0.0.0.0 255.255.255.0 U 1 0 0 eth0
169.254.95.0 0.0.0.0 255.255.255.0 U 0 0 0 usb0
0.0.0.0 10.238.9.1 0.0.0.0 UG 0 0 0 eth0
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: e4:1f:13:b6:31:c0
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: e4:1f:13:b6:31:c2
I also tried mode=5 and mode=6, but still no luck. Thanks.
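For anyone following along, a mode 1 (active-backup) bond on CentOS 5 is typically configured with files along these lines. This is a sketch, not the OP's actual config; the host address marked below is hypothetical:

```shell
# /etc/modprobe.conf -- load the bonding driver in mode 1 with 100 ms MII polling
alias bond0 bonding
options bond0 mode=1 miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=10.238.9.10   # hypothetical host address on the 10.238.9.0/24 subnet
NETMASK=255.255.255.0
GATEWAY=10.238.9.1
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 (ifcfg-eth1 identical except DEVICE=eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

Note that GATEWAY is set in ifcfg-bond0, not in either slave's file; a GATEWAY line left over in ifcfg-eth0 can produce exactly the eth0-bound default route seen in the route -n output above.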
OK, so where is the ping that's not working? Have you verified that eth0 configured without a bond works in the same way? Since you're using mode 1, bond0 should behave exactly the same as eth0, as eth1 is going to be doing absolutely nothing.
Hmm, your default gateway is set on eth0, not bond0; change that and you should be OK. I'm *pretty* sure the route should always be on the bonded interface, not on the active NIC; it doesn't make sense any other way. Your config files read OK though, which seems odd.
Yes, it is defined there, so it is a little odd to me. If you remove the default route with "route del default gw 10.238.9.1" and then add it manually with "route add default gw 10.238.9.1 dev bond0", it should be added against the bond0 interface, which makes sense from the bonding side of things.
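Putting those two commands together, with a quick check afterwards (the gateway address comes from the route -n output earlier in the thread; run as root):

```shell
# Remove the default route currently bound to eth0
route del default gw 10.238.9.1

# Re-add it, pinned to the bond interface
route add default gw 10.238.9.1 dev bond0

# Verify: the UG line should now list bond0 in the Iface column
route -n | grep '^0.0.0.0'
```

This only fixes the running system; to make it stick across reboots, the gateway needs to be defined on bond0 (or via GATEWAYDEV=bond0 in /etc/sysconfig/network) rather than on a slave interface.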
Hi Chris,
Yes, it definitely works without bonding, but since it's a web server we are required to configure bonding to provide load balancing and redundancy. Please advise. Thank you.