Fedora 18 bonded interface LACP mode not aggregating link throughput
Hello,
I've bonded two 1 Gbit ports together on each of two servers in bonding mode 4 (802.3ad), but cannot exceed 1 Gbit/s transfer speeds between them. The Juniper EX2200 switch is configured for LACP.
I'm trying to test glusterfs performance over aggregated links and this is a bit of a stumbling block...
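For reference, here is roughly how I am checking the bond state and measuring throughput (bond0 is just the name I gave the bond device, iperf is simply the tool I happen to test with, and <address-of-server-A> is a placeholder):

# Confirm that mode 4 (802.3ad) is active and that both slaves joined the aggregator
cat /proc/net/bonding/bond0

# Single TCP stream test between the two servers
iperf -s                         # on server A
iperf -c <address-of-server-A>   # on server B

A single stream tops out at roughly 1 Gbit/s, as described above.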
I'm afraid this is probably not a bug, nor a limitation specific to the Linux bonding driver.
The question is how packets are being distributed across the individual links in the team. Most switches can be configured to select a link based on a hash of the destination MAC or IPv4 address in the frame. This means that traffic to the same address will always be sent over the same sublink, effectively limiting the bandwidth to that of a single team member.
Some equipment can include layer 4 information in the hash, such as TCP/UDP port numbers. This helps, but any individual TCP or UDP session will still be limited to one sublink. This is actually intentional, as it avoids re-ordering of frames (see the Wikipedia article on Link Aggregation for more information).
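To make that concrete: a typical layer 2 policy picks the member link as something like (source MAC XOR destination MAC) modulo the number of links, so every frame between the same pair of hosts produces the same index and therefore uses the same link. A layer 3+4 policy folds the TCP/UDP port numbers into the hash as well, so different sessions between the same two hosts can land on different links, but any single session still maps to exactly one of them.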
This is not much of an issue if the server is communicating with lots of different clients through a switch, or if the link is part of a network backbone between switches. In your case, however, the LACP link is set up between two servers, so the source and destination MAC addresses will always be the same. If neither server is routing IP traffic or has multiple IP addresses, even the IP addresses at each end will always be the same.
Unless the Linux bonding driver supports the inclusion of layer 4 information in the sublink selection algorithm, or can be configured to use simple round-robin load balancing across sublinks, an LACP team between two servers won't increase the total bandwidth significantly, or even at all.
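For what it's worth, the Linux bonding driver can include layer 4 information in its transmit hash for mode 4 through the xmit_hash_policy option, but note that this only controls which sublink this host transmits on; the switch hashes the return direction with its own policy, and any single TCP stream is still pinned to one link either way. A minimal sketch, assuming a Fedora-style ifcfg file and a bond named bond0:

# /etc/sysconfig/network-scripts/ifcfg-bond0  (only the relevant line shown)
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"

The same setting can usually also be changed through sysfs (echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy), though some kernel versions only accept it while the bond is down.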
Thanks for providing clarity there. I re-read the standard and it makes sense now.
Are you aware of any other method of increasing point-to-point throughput, other than upgrading to a faster interface? 10GbE is pretty expensive and not really an option for testing purposes.
According to the kernel bonding driver documentation, the bonding driver does support a non-LACP round-robin mode that transmits packets sequentially across the slaves (the parameter is "mode=0", also known as "mode=balance-rr").
I'm not aware of any switches that support this mode, but if you're going to connect two Linux servers directly, it should work.
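If you do wire the two servers back to back, a minimal sketch of a balance-rr bond using Fedora's network-scripts could look like this (the bond name bond0, the slave names em1/em2 and the addresses are placeholders; mirror the same files on the second server with the other IP address):

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
BONDING_OPTS="mode=balance-rr miimon=100"
# use e.g. 10.0.0.2 on the other server
IPADDR=10.0.0.1
PREFIX=24
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-em1  (and likewise ifcfg-em2)
DEVICE=em1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=no
NM_CONTROLLED=no

Keep in mind that round-robin striping can deliver packets out of order, so a single TCP stream usually gains throughput but rarely reaches the full 2 Gbit/s; raising net.ipv4.tcp_reordering on both ends can help.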