Old 04-18-2012, 05:30 PM   #1
Rohit_4739
Member
 
Registered: Oct 2010
Distribution: Red Hat
Posts: 224

Rep: Reputation: 7
Channel bonding issue - bond0 interface not coming up on RHEL6


Hello,

I am trying to set up channel bonding on a RHEL6 VirtualBox VM. I have 2 Ethernet cards on the machine, both set up in Internal Network mode. I followed all the steps exactly as described in the RHEL6 Deployment Guide, but for some reason I think the results are not correct. Here are my doubts:

1. When I reboot the system, the bond0 interface is up with its IP assigned, and eth1 and eth2 are acting as slaves, which is expected. But as soon as I restart the network service, the bond0 interface does not come up.

2. My second question: when bond0 is up and has an IP assigned to it, I ping that IP from my other VM and it responds. Now when I manually bring down both physical interfaces, eth1 and eth2, I still get ping replies. I am wondering how the bonded interface can respond when both physical interfaces are down.

Please correct me wherever I am missing anything, or if I am doing this the wrong way, please guide me in doing it correctly. I am attaching the screenshots and config files for my system.

Network Interfaces config files

Quote:

[root@prod ~]# cat /etc/sysconfig/network-scripts-ifcfg-bond0
DEVICE="bond0"
ONBOOT="yes"
BOOTPROTO="none"
IPADDR="192.168.10.1"
USERCTL="no"
BONDING_OPTS="mode=1 miimon=100"

[root@prod ~]# cat /etc/sysconfig/network-scripts-ifcfg-eth1
DEVICE="eth1"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond0"
SLAVE="yes"
USERCTL="no"

[root@prod ~]# cat /etc/sysconfig/network-scripts-ifcfg-eth2
DEVICE="eth2"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond0"
SLAVE="yes"
USERCTL="no"
Contents of /etc/modprobe.d/bonding.conf

Quote:
[root@prod ~]# cat /etc/modprobe.d/bonding.conf
alias bond0 bonding
I have also attached screenshots of the various outputs.
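
For reference, a quick way to confirm whether the bond actually came up after a reboot or a network restart is to look at the bonding driver's status file and the interface list. This is only a minimal sketch, assuming the stock RHEL6 bonding module and initscripts; output will vary per system.

Quote:
# Show the bonding driver's view: mode, MII status, and enslaved interfaces
cat /proc/net/bonding/bond0

# Confirm the bond and both slaves are administratively up
ip -o link show bond0
ip -o link show eth1
ip -o link show eth2

# Restart networking and re-check, watching for errors from the ifup scripts
service network restart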
 
Old 04-19-2012, 11:20 AM   #2
kbp
Senior Member
 
Registered: Aug 2009
Posts: 3,758

Rep: Reputation: 643
Assuming you didn't modify the copy and paste, you have dashes(-) instead of slashes(/), marked in red:
Quote:
[root@prod ~]# cat /etc/sysconfig/network-scripts-ifcfg-bond0
DEVICE="bond0"
ONBOOT="yes"
BOOTPROTO="none"
IPADDR="192.168.10.1"
USERCTL="no"
BONDING_OPTS="mode=1 miimon=100"

[root@prod ~]# cat /etc/sysconfig/network-scripts-ifcfg-eth1
DEVICE="eth1"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond0"
SLAVE="yes"
USERCTL="no"

[root@prod ~]# cat /etc/sysconfig/network-scripts-ifcfg-eth2
DEVICE="eth2"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond0"
SLAVE="yes"
USERCTL="no"
/etc/sysconfig/network-scripts is the directory that the ifcfg-ethX files should live in

I'd also ensure you put a NETMASK directive in with the IPADDR
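
For example, a corrected ifcfg-bond0 (slashes in the path, NETMASK added) might look like the sketch below; the 255.255.255.0 mask is only an assumed /24 for illustration:

Quote:
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE="bond0"
ONBOOT="yes"
BOOTPROTO="none"
IPADDR="192.168.10.1"
NETMASK="255.255.255.0"
USERCTL="no"
BONDING_OPTS="mode=1 miimon=100"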

Last edited by kbp; 04-19-2012 at 11:21 AM.
 
Old 04-19-2012, 01:33 PM   #3
Rohit_4739
Member
 
Registered: Oct 2010
Distribution: Red Hat
Posts: 224

Original Poster
Rep: Reputation: 7
Quote:
Originally Posted by kbp
Assuming you didn't modify the copy and paste, you have dashes(-) instead of slashes(/), marked in red:


/etc/sysconfig/network-scripts is the directory that the ifcfg-ethX files should live in

I'd also ensure you put a NETMASK directive in with the IPADDR
Hi Kbp,

That was just a typo when I posted here; in the actual files it is "/" and not "-". Yes, I had also tried adding a NETMASK, but that did not bring any success either.

I have been working on this for the last 2 days. I am having a problem setting up the two NICs on my VM, which I think is what is creating the problem. I have 2 NICs on the VM; ifconfig shows both interfaces, but only one gets an IP assigned to it.

Here are the configs of both interfaces and the results of ifconfig. Can you please help out with this?
Attached Images
File Type: png config files.png (25.7 KB, 4 views)
File Type: png ifconfig result.png (62.5 KB, 3 views)

Last edited by Rohit_4739; 04-19-2012 at 01:49 PM.
 
Old 04-19-2012, 03:34 PM   #4
kbp
Senior Member
 
Registered: Aug 2009
Posts: 3,758

Rep: Reputation: 643
Try 'grep eth /var/log/messages' to see if both devices are actually there.
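
A couple of extra checks that might help confirm both NICs are really present and have link. Just a sketch; interface names are taken from the earlier posts, and it assumes ethtool is installed, which it normally is on RHEL6:

Quote:
# List every interface the kernel knows about, configured or not
ip -o link show

# Check driver, link detection and speed for the second NIC
ethtool eth2

# Kernel messages about the NICs since boot
dmesg | grep -i eth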
 
Old 04-19-2012, 03:37 PM   #5
Rohit_4739
Member
 
Registered: Oct 2010
Distribution: Red Hat
Posts: 224

Original Poster
Rep: Reputation: 7
Quote:
Originally Posted by kbp
Try 'grep eth /var/log/messages' to see if both devices are actually there.
Yes, both devices are there; I checked dmesg as well. Both devices are detected, but the thing is that only one is getting configured. I posted the config files for both interfaces.
 
Old 04-23-2012, 03:29 PM   #6
Rohit_4739
Member
 
Registered: Oct 2010
Distribution: Red Hat
Posts: 224

Original Poster
Rep: Reputation: 7
Hi Guys,

Finally I was able to resolve the issues I was facing in setting up channel bonding. It is working now, but I am not completely sure whether the results are correct.

I used two cards, eth7 and eth8, to make a bond0 interface. The IP of the bond0 interface is 192.168.30.1. Now I ping this machine from my second test machine, which has the IP 192.168.30.3, and it works fine. Then I bring the interface eth8 down using "ifdown eth8" and ping again from the test machine, and it is still successful; after that I bring eth8 up again using "ifup eth8".

Now if I repeat the same for eth7, the ping stops working. That is, bringing eth7 down and keeping eth8 up, I do not get ping responses from the test machine, even though one of the two bonded interfaces (eth8) is still up.

The bonded interface bond0 is set up in round-robin mode (mode=0). I tried using mode=1 as well but got the same results.
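
One way to see what the bond itself thinks is happening during such a test is to watch the driver's status file while bringing the slaves down and up. A rough sketch, assuming the standard bonding driver and the interface names above:

Quote:
# Show which slave is currently active and each slave's MII status
cat /proc/net/bonding/bond0

# Repeat the failover test while re-checking the status
ifdown eth7
cat /proc/net/bonding/bond0   # in mode=1 the "Currently Active Slave" should change
ifup eth7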

So can somebody please tell me whether this is working as it should, or whether there is some discrepancy in the results?

Here are the config files

Network Interfaces config files

Quote:

[root@prod ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE="bond0"
ONBOOT="yes"
BOOTPROTO="none"
IPADDR="192.168.30.1"
USERCTL="no"
BONDING_OPTS="mode=1 miimon=100"

[root@prod ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth7
DEVICE="eth7"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond0"
SLAVE="yes"
USERCTL="no"

[root@prod ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth8
DEVICE="eth8"
ONBOOT="yes"
BOOTPROTO="none"
MASTER="bond0"
SLAVE="yes"
USERCTL="no"
Thanks
Rohit
 
Old 05-01-2012, 08:50 AM   #7
Rohit_4739
Member
 
Registered: Oct 2010
Distribution: Red Hat
Posts: 224

Original Poster
Rep: Reputation: 7
Can anyone answer my queries from the above post?

Hi Guys,

Can anyone help me out with the queries made above?
 
Old 05-01-2012, 09:20 AM   #8
Skaperen
Senior Member
 
Registered: May 2009
Location: WV, USA
Distribution: Slackware, CentOS, Ubuntu, Fedora, Timesys, Linux From Scratch
Posts: 1,777
Blog Entries: 20

Rep: Reputation: 115
Bonding does not spread the load randomly over multiple interfaces. Instead, the load is spread based on particular aspects of the packet. The same exact packet will always go over the same interface. The same exact connection will always go over the same interface (where the design intention is to maintain correct packet ordering). What you get from bonding is that on average, a wide variety of different traffic will go over one or the other interface. When you see the ping fail when eth7 is down, that probably means that particular ping packet is hashing to eth7's bonding index. Ping something else. Or try various random port numbers with UDP packets to see what goes through. Can you watch tcpdump on both ends during testing?
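
For example (purely a sketch, using the interface names and addresses from the earlier posts), running tcpdump on each slave while pinging from the test machine shows which physical link the ICMP traffic actually takes:

Quote:
# On the bonded host, watch each slave in its own terminal
tcpdump -ni eth7 icmp
tcpdump -ni eth8 icmp

# From the test machine, ping the bond's address and vary the target/traffic
ping 192.168.30.1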
 
Old 05-01-2012, 09:38 AM   #9
Rohit_4739
Member
 
Registered: Oct 2010
Distribution: Red Hat
Posts: 224

Original Poster
Rep: Reputation: 7
Quote:
Originally Posted by Skaperen
Bonding does not spread the load randomly over multiple interfaces. Instead, the load is spread based on particular aspects of the packet. The same exact packet will always go over the same interface. The same exact connection will always go over the same interface (where the design intention is to maintain correct packet ordering). What you get from bonding is that on average, a wide variety of different traffic will go over one or the other interface. When you see the ping fail when eth7 is down, that probably means that particular ping packet is hashing to eth7's bonding index. Ping something else. Or try various random port numbers with UDP packets to see what goes through. Can you watch tcpdump on both ends during testing?
Quote:
The same exact packet will always go over the same interface
So then what would happen if one of the NICs fails? I could not understand what you meant by "ping something else". Could you please explain in a bit more detail?
 
Old 05-01-2012, 09:49 AM   #10
Skaperen
Senior Member
 
Registered: May 2009
Location: WV, USA
Distribution: Slackware, CentOS, Ubuntu, Fedora, Timesys, Linux From Scratch
Posts: 1,777
Blog Entries: 20

Rep: Reputation: 115
If an interface goes down, then at least for a time, some traffic won't go through. Bonding is not intended as a path failure fallback method (routing is for that). It is intended as a kind of load balancing so you can get something approaching 2 gigabits of bandwidth usability on a pair of 1 gigabit links. When one NIC/interface does fail, bonding may eventually figure that out and re-order its list of links. But doing that can disrupt what is happening, so if that is implemented, it's going to be delayed. Going back the other way (adding the interface back when it works again) might be even longer.

What is your goal that led you to use bonding?

"ping something else" = "ping a different address that would still be reached over this bonded link"
 
Old 05-01-2012, 10:32 AM   #11
Rohit_4739
Member
 
Registered: Oct 2010
Distribution: Red Hat
Posts: 224

Original Poster
Rep: Reputation: 7
Quote:
Originally Posted by Skaperen
If an interface goes down, then at least for a time, some traffic won't go through. Bonding is not intended as a path failure fallback method (routing is for that). It is intended as a kind of load balancing so you can get something approaching 2 gigabits of bandwidth usability on a pair of 1 gigabit links. When one NIC/interface does fail, bonding may eventually figure that out and re-order its list of links. But doing that can disrupt what is happening, so if that is implemented, it's going to be delayed. Going back the other way (adding the interface back when it works again) might be even longer.
As per Red Hat's official document Channel Bonding can be used for Fault Tolerance using active-backup mode where transmissions are received and sent out via the first available bonded slave interface. Another bonded slave interface is only used if the active bonded slave interface fails.
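
For reference, active-backup can also be requested by name instead of by number; a minimal sketch of the BONDING_OPTS line for that mode (the primary= setting is optional and only illustrative):

Quote:
BONDING_OPTS="mode=active-backup miimon=100"
# equivalent to mode=1; optionally pin the preferred slave:
# BONDING_OPTS="mode=active-backup miimon=100 primary=eth7"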

Quote:
What is your goal that led you to use bonding?
I was just curious about channel bonding, so I was just testing how it works.

Quote:
"ping something else" = "ping a different address that would still be reached over this bonded link"
Well, if I ping eth7 keeping eth8 down, the ping works, but if I keep eth7 down and ping the bonded interface's IP address, it does not reply back.
 
Old 05-01-2012, 12:06 PM   #12
Skaperen
Senior Member
 
Registered: May 2009
Location: WV, USA
Distribution: Slackware, CentOS, Ubuntu, Fedora, Timesys, Linux From Scratch
Posts: 1,777
Blog Entries: 20

Rep: Reputation: 115
Quote:
Originally Posted by Rohit_4739
As per Red Hat's official document Channel Bonding can be used for Fault Tolerance using active-backup mode where transmissions are received and sent out via the first available bonded slave interface. Another bonded slave interface is only used if the active bonded slave interface fails.
That's something different than the original bonding. How well has it been documented? Did you configure this particular mode?

I accomplish a similar effect by just configuring two or more NICs with the same IP address. If a NIC goes dead, the next time an ARP query is made, it is only answered on the other NIC. This is, unfortunately, also slow. And this isn't perfect in cases of a NIC going half dead (gets ARP query, answers over NIC with dead transmit, and no one gets it). I'm still in thought process of how to do fail-over network configuration reliably and quickly. A smart network manager (not the one that comes with many distros now, which is pretty much brain dead in terms of actually managing anything) seems to be called for.

Quote:
Originally Posted by Rohit_4739
Well, if I ping eth7 keeping eth8 down, the ping works, but if I keep eth7 down and ping the bonded interface's IP address, it does not reply back.
What do you mean by "ping eth7"? Normally you ping an IP address, not an interface (though IPv6 scope local allows you to ping to a specific interface's IP address via a specific interface). One thought I had was to tunnel IPv4 through IPv6 scope local (but that won't work for Windows).
 
Old 05-01-2012, 12:59 PM   #13
Rohit_4739
Member
 
Registered: Oct 2010
Distribution: Red Hat
Posts: 224

Original Poster
Rep: Reputation: 7
Quote:
Originally Posted by Skaperen
That's something different than the original bonding. How well has it been documented? Did you configure this particular mode?
Yes, it is very well documented in Red Hat's official documentation, and I did configure this mode; however, I am not getting the expected result.
Quote:
I accomplish a similar effect by just configuring two or more NICs with the same IP address. If a NIC goes dead, the next time an ARP query is made, it is only answered on the other NIC. This is, unfortunately, also slow. And this isn't perfect in cases of a NIC going half dead (gets ARP query, answers over NIC with dead transmit, and no one gets it). I'm still in thought process of how to do fail-over network configuration reliably and quickly. A smart network manager (not the one that comes with many distros now, which is pretty much brain dead in terms of actually managing anything) seems to be called for.

What do you mean by "ping eth7"? Normally you ping an IP address, not an interface (though IPv6 scope local allows you to ping to a specific interface's IP address via a specific interface). One thought I had was to tunnel IPv4 through IPv6 scope local (but that won't work for Windows).
Yes, I did ping the IP address only. What I meant was that when I keep eth7 up and eth8 down and try to ping the bonded interface's IP, the ping works, but not if I keep eth7 down and eth8 up.


And yes, I really appreciate your effort and responsiveness in taking time out of your busy schedule to answer my queries. Thanks a lot, although I am still not clear on my doubts about channel bonding.
 
Old 05-01-2012, 04:13 PM   #14
Skaperen
Senior Member
 
Registered: May 2009
Location: WV, USA
Distribution: Slackware, CentOS, Ubuntu, Fedora, Timesys, Linux From Scratch
Posts: 1,777
Blog Entries: 20

Rep: Reputation: 115
I'm only familiar with the type of bonding for load balancing. I am unfamiliar with the failover mode. I would use another method, anyway, so I am unlikely to be trying it. And maybe it doesn't even work, or work as expected.

Another possible way I thought about trying for failover was bridging two interfaces together and binding the local IP address(es) to the bridge itself (e.g. use the bridge name as the interface name). This WILL create a packet loop unless you enable spanning tree. And this may only work with a like Linux system or a switch with spanning tree on the other end.
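
A rough sketch of that bridge approach with the brctl tooling (the bridge name br0 and the addressing are assumptions, not something tested here):

Quote:
# Create a bridge, enable spanning tree to avoid the packet loop, enslave both NICs
brctl addbr br0
brctl stp br0 on
brctl addif br0 eth7
brctl addif br0 eth8

# Bind the IP to the bridge itself instead of the physical interfaces
ip addr add 192.168.30.1/24 dev br0
ip link set br0 up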

Otherwise, failover routing (not easy on a LAN with a lot of hosts).
 
  

