Linux - Networking
This forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
OK, I have a server with a dual gigabit uplink to its switch via two Intel PRO/1000 gigabit cards, both using the Intel e1000e driver.
Initially I used balance-alb, as recommended by the switch owner, but it was only balancing outgoing traffic from the server; all inbound traffic arrived on the primary network card (in this case eth2).
I tried to research it, and all I could find says it is simply supposed to work on the cards I use (but it doesn't; it is transmit-only bonding, the same as balance-tlb mode).
So if anyone knows of any configuration I may need to do to get the MAC-masking (receive load balancing) feature working in balance-alb, it would be appreciated, thanks.
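A first diagnostic step (a sketch, assuming the bond is named bond0): balance-alb's receive balancing works by answering ARP requests with different slave MACs, so it only balances IPv4 peers on the same L2 segment, and the bond's runtime state can be inspected via procfs:

```shell
# Show the bond's mode, active slaves, and per-slave MII status;
# under balance-alb both slaves should be listed and up.
cat /proc/net/bonding/bond0

# Receive balancing in ALB is negotiated via ARP, so peers must be
# on the local subnet; traffic arriving through a router is sourced
# from one MAC (the gateway's) and will all land on a single slave.
ip neigh show dev bond0
```

If most of your inbound traffic comes through a gateway rather than from hosts on the same subnet, that alone would explain inbound traffic sticking to one card.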
Now on to the other bonding modes.
I then played with balance-rr, typical round robin. This gave good overall speeds, but since single-threaded transfer speed matters to me this mode was a no-go: single-threaded speeds were significantly affected, probably due to packet-reordering issues. I played with the TCP reordering sysctl, setting it to 127, but there was no significant improvement and it actually seemed to make things worse.
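For reference, the sysctl in question is net.ipv4.tcp_reordering, which controls how many out-of-order segments TCP tolerates before treating them as loss (the kernel default is 3). A minimal sketch of the change I tried:

```shell
# Raise TCP's reordering tolerance so segments arriving out of order
# from balance-rr's per-packet striping are not misread as packet loss.
# Kernel default is 3; 127 is the value I experimented with.
sysctl -w net.ipv4.tcp_reordering=127

# To persist across reboots, add the setting to /etc/sysctl.conf:
#   net.ipv4.tcp_reordering = 127
```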
Right now I have settled on balance-xor. With the layer2 hash policy the balancing was very poor; layer2+3 seems much better, and I see balanced traffic without single-threaded speeds being compromised. I am currently trialling layer3+4 (the policy people have recommended), but this seems to have dropped overall performance compared to layer2+3, possibly due to reordering, which I have read can occur with layer3+4 in some scenarios. The only issue I see with layer2+3 is that overall speeds are lower than balance-rr, though still better than an unbonded setup.
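For anyone following along, a sketch of how the mode and hash policy can be set (paths and option names are the standard bonding driver ones; bond0 is assumed):

```shell
# Persistent setup via module options, e.g. /etc/modprobe.d/bonding.conf:
#   options bonding mode=balance-xor xmit_hash_policy=layer2+3 miimon=100

# Or at runtime via sysfs (the mode can only be changed while the
# bond interface is down; the hash policy can be switched live):
ip link set bond0 down
echo balance-xor > /sys/class/net/bond0/bonding/mode
echo layer2+3 > /sys/class/net/bond0/bonding/xmit_hash_policy
ip link set bond0 up

# Verify what the driver is actually using:
cat /sys/class/net/bond0/bonding/xmit_hash_policy
```

Note that layer2+3 and layer3+4 only balance across flows, so two hosts talking over a single TCP connection always use one slave; that is why they preserve single-threaded ordering where balance-rr does not.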
Balance-alb offers the best single-threaded speeds on a consistent basis, so fixing its lack of inbound balancing is my goal; any thoughts on anything else I have written are greatly appreciated, though.
Finally, is it a good idea to adjust txqueuelen on the bond0 interface? It is set to 0 by default.