bonding: balance-xor and balance-alb
OK, I have a server with a dual gigabit uplink to its switch via two gigabit NICs; both are Intel PRO cards using the intel e1000e driver.
Initially I used balance-alb, as recommended by the switch owner, but it was only balancing outgoing traffic from the server; all inbound traffic arrived on the primary network card only (in this case eth2).
I tried to research it, and all I could find was that it is simply supposed to work on the cards I use (but it doesn't; for me it is transmit-only balancing, the same as balance-tlb mode).
So if anyone knows of any configuration I may need to do to get the MAC masquerading / receive load balancing feature working in balance-alb, it would be appreciated, thanks.
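In case it helps anyone diagnose this, here is roughly how the bond is set up and how I have been checking whether receive load balancing ever kicks in. The config file path and the second slave name (eth3) are just examples from my side, adjust for your setup:

    # /etc/modprobe.d/bonding.conf (example location, varies by distro)
    options bonding mode=balance-alb miimon=100

    # Check which mode the kernel is actually running and the per-slave state:
    cat /proc/net/bonding/bond0

    # Watch inbound byte counters per slave during a transfer from another host.
    # In balance-alb both counters should grow; in my case only eth2 does.
    # Note: alb receive balancing works via ARP replies, so it can only balance
    # peers on the same L2 segment; anything arriving through a router hits one MAC.
    watch -n1 'grep . /sys/class/net/eth2/statistics/rx_bytes /sys/class/net/eth3/statistics/rx_bytes'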
Now on to the other bonding types.
I then played with balance-rr, the typical round robin. This allowed good overall speeds, but since single-threaded transfer speeds are important to me this mode was a no go: single-threaded speeds were significantly affected, probably due to packet ordering issues. I played with the net.ipv4.tcp_reordering sysctl, setting it to 127, but there was no significant improvement and it actually seemed to make things worse.
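For reference, this is the knob I was adjusting (a runtime change; it would need an entry in /etc/sysctl.conf to persist across reboots):

    sysctl -w net.ipv4.tcp_reordering=127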
Right now I have settled on balance-xor. In layer2 mode the balancing was very poor; layer2+3 seems much better, and I see balanced traffic without single-threaded speeds being compromised. At the moment I am trialling layer3+4 (the one people have recommended), but this seems to have dropped overall performance compared with layer2+3, possibly due to reordering, which I read can happen with layer3+4 in some scenarios. The only issue I seem to come across with layer2+3 is that overall speeds seem lower than balance-rr, though still better than a non-bonded setup.
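For anyone wanting to compare, the xor setups I have been switching between look roughly like this. The modprobe file path is just an example, and on the kernels I have used the mode/hash policy can also be flipped through sysfs while the bond is down:

    # /etc/modprobe.d/bonding.conf
    options bonding mode=balance-xor xmit_hash_policy=layer2+3 miimon=100

    # Runtime equivalent (the bond generally has to be down / slave-free to change mode):
    echo balance-xor > /sys/class/net/bond0/bonding/mode
    echo layer2+3    > /sys/class/net/bond0/bonding/xmit_hash_policy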
Balance-alb offers the best single-threaded speeds on a consistent basis, so fixing the lack of inbound balancing on it seems to be my goal; any thoughts on anything else I have written are greatly appreciated though.
Finally, is it a good idea to adjust txqueuelen on the bond0 interface? It is set to 0 by default.
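If it does make sense to change it, I assume it would just be something like this, with 1000 being the usual default on a physical NIC:

    ip link set dev bond0 txqueuelen 1000
    # older style: ifconfig bond0 txqueuelen 1000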