Bridging network ports for better connectivity.. possible?
Hi guys,
I was wondering: is it possible to bond two NICs together (each with its own public IP) so that I can 'load-balance' connections across the two NICs, and have some form of failover should one NIC fail?
It sounds a bit over-engineered, I know, but I'm just wondering.
That's not bridging, it's bonding. You can have either redundancy (in case of failure) or capacity (aggregated throughput), but not both, AFAIK. Very, very few applications would need bonding for throughput... pretty much the only thing that could saturate your pipe would be very-high-quality streaming video, probably multicast. Is your upstream connection faster than 100Mb? If not, you would never need to aggregate your bandwidth.
Are there tools to verify that the bonding works and that data is being transferred on both channels?
Are there tools to monitor it, too?
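For what it's worth, the bonding driver exposes its state under /proc, and comparing per-slave traffic counters gives a rough check that both links are carrying data. A sketch (the interface names bond0/eth0/eth1 are assumptions about your setup):

```shell
# Show the bond's mode, MII link status, and each slave's state.
cat /proc/net/bonding/bond0

# Compare RX/TX byte counters on each slave; if both grow under load,
# traffic is being spread across both links.
ifconfig eth0 | grep bytes
ifconfig eth1 | grep bytes
```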
So basically, if I am using RHEL4, I just add an alias entry to modprobe.conf, create a new ifcfg-bond#, set MASTER=bond# on both of my physical interfaces, and that's it, yes?
I'm basing this on RHEL3 documents... I can't seem to find RHEL4 ones.
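For reference, the RHEL-style layout usually looks something like the sketch below. The IP address, mode, and miimon values are placeholders, not taken from any particular setup:

```shell
# /etc/modprobe.conf
alias bond0 bonding
options bond0 miimon=100 mode=1   # mode=1 is active-backup (failover)

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 (same pattern for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```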
One end has a public IP; that's where the WWW is.
One end has a private IP; that's to my DB and file server.
This server has 4 Ethernet ports, and I want to set up bonding for each pair. I have successfully created bond0 for the private end,
but I can't get bond1 (for the public end) to work. I keep getting the following error:
Code:
Bringing up interface bond1: bonding device bond1 does not seem to be present, delaying initialization.
There seem to be many ways of configuring this, among the already very limited information I can find online for RHEL4.
Have any of you run into a similar situation and managed to solve it? I've read about people having problems with multiple bond interfaces. Some couldn't solve it, some did, but their solutions don't work for me.
I was curious why you're using bond1 instead of bond0. Multiple bond interfaces are only supported in RHEL4 Update 2 and later. What update of RHEL4 are you using?
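On kernels of that era, the "does not seem to be present" error typically means the bonding module never created a second device. One commonly cited workaround (a sketch; the mode/miimon values are assumptions) is to tell the driver how many bond devices to create:

```shell
# /etc/modprobe.conf -- ask the bonding module for two devices
alias bond0 bonding
alias bond1 bonding
options bonding max_bonds=2 miimon=100 mode=1
```

Note that with max_bonds both bonds share the same options; loading the module a second time with `modprobe -o bond1` is the usual alternative when the two bonds need different modes.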
Sorry for the lack of responses to your thread; I've only just seen it now myself. Are you still after advice here?
If you're still having the issues above, do you actually see eth0 and eth1 at all in ifconfig output? In my experience (which I admit isn't huge) you'll get both those interfaces up before the bond itself comes up. Is the MAC correct for eth0? Do you *need* that MAC in there? It could be throwing a spanner in the works, though it could be needed to force a certain device to a certain interface number. If so, I'd be looking to enforce that on all eth interfaces, not just that one.
Also, you've not made any mention of /proc/net/bonding/bond1. Does that exist, and does it tell you anything about what's there?
I hadn't heard anything about limitations on the number of bonds on RHEL 4, but then I have little reason to; it sounds pretty unlikely to me, and it would certainly be a nasty bug if so.
Yeah man, dying for some info from you experts here. I scour the net but find little that is relevant. A lot of people have the multiple-bond-interface problem, but none of the solutions work for me.
/proc/net/bonding/bond1 doesn't exist; that's my problem. It never gets created, and I wonder why.