07-25-2013, 04:23 PM | #1
LQ Newbie | Registered: Jul 2013 | Posts: 2
Fedora 18 bonded interface LACP mode not aggregating link throughput
Hello,
I've bonded two 1Gbit ports in mode 4 (802.3ad/LACP) on each of two servers, but I cannot exceed 1Gbit/s transfer speeds between them. The Juniper EX2200 switch is configured for LACP.
I'm trying to test GlusterFS performance over aggregated links, and this is a bit of a stumbling block...
Switch interfaces (note the speed is reported as 2Gbps):
Code:
root@EX2200-01> show interfaces ae2
Physical interface: ae2, Enabled, Physical link is Up
Interface index: 130, SNMP ifIndex: 571
Description: LAG_6-7
Link-level type: Ethernet, MTU: 1514, Speed: 2Gbps, BPDU Error: None, MAC-REWRITE Error: None, Loopback: Disabled, Source filtering: Disabled, Flow control: Disabled,
Minimum links needed: 1, Minimum bandwidth needed: 0
Device flags : Present Running
Interface flags: SNMP-Traps Internal: 0x4000
Current address: a8:d0:e5:b7:6b:05, Hardware address: a8:d0:e5:b7:6b:05
Last flapped : 2013-07-24 09:50:42 EDT (00:10:45 ago)
Input rate : 2048 bps (2 pps)
Output rate : 2048 bps (2 pps)
Logical interface ae2.0 (Index 76) (SNMP ifIndex 544)
Flags: SNMP-Traps 0x40004000 Encapsulation: ENET2
Statistics Packets pps Bytes bps
Bundle:
Input : 36 0 16056 0
Output: 6 0 276 0
Protocol eth-switch
Flags: Trunk-Mode
{master:0}
root@EX2200-01> show interfaces ae3
Physical interface: ae3, Enabled, Physical link is Up
Interface index: 156, SNMP ifIndex: 545
Description: LAG_8-9
Link-level type: Ethernet, MTU: 1514, Speed: 2Gbps, BPDU Error: None, MAC-REWRITE Error: None, Loopback: Disabled, Source filtering: Disabled, Flow control: Disabled,
Minimum links needed: 1, Minimum bandwidth needed: 0
Device flags : Present Running
Interface flags: SNMP-Traps Internal: 0x4000
Current address: a8:d0:e5:b7:6b:06, Hardware address: a8:d0:e5:b7:6b:06
Last flapped : 2013-07-21 16:09:46 EDT (2d 17:51 ago)
Input rate : 2560 bps (3 pps)
Output rate : 2560 bps (3 pps)
Logical interface ae3.0 (Index 66) (SNMP ifIndex 560)
Flags: SNMP-Traps 0x40004000 Encapsulation: ENET2
Statistics Packets pps Bytes bps
Bundle:
Input : 35 0 14434 0
Output: 4 0 184 0
Protocol eth-switch
Flags: Trunk-Mode
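On the Linux side, the aggregation state can be double-checked through the bonding driver's status file; a generic check (using the bond0 name from my config below) is:
Code:
# Show the bond mode, LACP partner details and slave membership
cat /proc/net/bonding/bond0
# In 802.3ad mode, both slaves should report the same Aggregator ID;
# if they differ, LACP negotiation has split them into separate aggregators.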
The two servers have the same network configuration:
Code:
[root@ovirt002 ~]# ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST> mtu 1500
inet6 fe80::225:90ff:fe78:5f4 prefixlen 64 scopeid 0x20<link>
ether 00:25:90:78:05:f4 txqueuelen 0 (Ethernet)
RX packets 3851071 bytes 4649202525 (4.3 GiB)
RX errors 0 dropped 17010 overruns 0 frame 0
TX packets 1023469 bytes 101350938 (96.6 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
br-bond0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.6.24 netmask 255.255.255.0 broadcast 10.0.6.255
inet6 fe80::225:90ff:fe78:5f4 prefixlen 64 scopeid 0x20<link>
ether 00:25:90:78:05:f4 txqueuelen 0 (Ethernet)
RX packets 609318 bytes 4378166987 (4.0 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 551837 bytes 42868942 (40.8 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 0 (Local Loopback)
RX packets 428131 bytes 22717269 (21.6 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 428131 bytes 22717269 (21.6 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
p255p1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 00:25:90:78:05:f4 txqueuelen 1000 (Ethernet)
RX packets 386017 bytes 91765659 (87.5 MiB)
RX errors 0 dropped 5 overruns 0 frame 0
TX packets 775537 bytes 70991171 (67.7 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xdf920000-df940000
p255p2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
ether 00:25:90:78:05:f4 txqueuelen 1000 (Ethernet)
RX packets 3465068 bytes 4557437790 (4.2 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 247948 bytes 30363031 (28.9 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xdf900000-df920000
Code:
[root@ovirt002 ~]# cat /etc/modprobe.d/bonding.conf
alias bond0 bonding
options bond0 miimon=100 mode=4 lacp_rate=1
[root@ovirt002 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=static
USERCTL=no
BRIDGE=br-bond0
[root@ovirt002 ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-bond0
DEVICE=br-bond0
ONBOOT=yes
TYPE=Bridge
DELAY=0
IPADDR=10.0.6.24
NETMASK=255.255.255.0
GATEWAY=10.0.6.1
BOOTPROTO=static
NM_CONTROLLED=no
STP=no
[root@ovirt002 ~]# cat /etc/sysconfig/network-scripts/ifcfg-p255p1
DEVICE=p255p1
ONBOOT=yes
HWADDR=00:25:90:78:05:f4
MTU=1500
NM_CONTROLLED=no
STP=no
MASTER=bond0
SLAVE=yes
[root@ovirt002 ~]# cat /etc/sysconfig/network-scripts/ifcfg-p255p2
HWADDR=00:25:90:78:05:F5
NAME=p255p2
UUID=5ac0dfca-de67-441d-80cd-02fee5706edb
ONBOOT=yes
STP=no
MASTER=bond0
SLAVE=yes
NM_CONTROLLED=no
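(As an aside, I understand that Fedora's initscripts may ignore the /etc/modprobe.d options because the ifup scripts create the bond through sysfs; if that turns out to matter, the usual recommendation is to put the parameters in BONDING_OPTS inside ifcfg-bond0 instead, roughly like this:)
Code:
# /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch: same settings as above,
# but with the bonding parameters passed via BONDING_OPTS instead of modprobe.d)
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=static
USERCTL=no
BRIDGE=br-bond0
BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast"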
lspci shows the NIC as:
Code:
02:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
I'm wondering if maybe the network cards aren't fully supported by the Linux bonding driver...
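A quick way to confirm which driver actually backs the slaves, and which mode the bond ended up in at runtime (the I350 should be handled by the in-kernel igb driver):
Code:
# Which driver/firmware sits behind each slave NIC (expecting igb for the I350)
ethtool -i p255p1
ethtool -i p255p2
# Which bonding mode is actually active at runtime
cat /sys/class/net/bond0/bonding/mode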
07-25-2013, 09:33 PM | #2
Senior Member | Registered: Jan 2012 | Distribution: Slackware | Posts: 3,348
I'm afraid this is probably not a bug, nor a limitation specific to the Linux bonding driver.
The question is how packets are being distributed across the individual links in the team. Most switches can be configured to select a link based on a hash of the destination MAC or IPv4 address in the frame. This means that traffic to the same address will always be sent over the same sublink, effectively limiting the bandwidth to that of a single team member.
Some equipment can include layer 4 information in the hash, such as TCP/UDP port numbers. This helps, but any individual TCP or UDP session will still be limited to one sublink. This is actually intentional, as it avoids re-ordering of frames (see the Wikipedia article on Link Aggregation for more information).
This is not much of an issue if the server is communicating with lots of different clients through a switch, or if the link is part of a network backbone between switches. In your case, however, the LACP link is set up between two servers, so the source and destination MAC addresses will always be the same. If neither server is routing IP traffic or has multiple IP addresses, even the IP addresses at each end will always be the same.
Unless the Linux bonding driver supports the inclusion of layer 4 information in the sublink selection algorithm, or can be configured to use simple round-robin load balancing across sublinks, an LACP team between two servers won't increase the total bandwidth significantly, or even at all.
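For completeness: the bonding driver documentation does list an xmit_hash_policy parameter (values include layer2, layer2+3 and layer3+4) that adds L3/L4 information to the transmit hash. It only affects the Linux-to-switch direction, though; the switch's own hash still decides the return path, and a single TCP stream is still pinned to one slave. If you want to experiment with it anyway, a sketch using the ifcfg style from your post would be:
Code:
# Include TCP/UDP ports in the slave-selection hash for *outgoing* traffic.
# Multiple parallel flows can then spread across both slaves, but one TCP
# stream still uses a single link, and the switch hashes the return traffic.
BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4"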
07-28-2013, 01:37 PM | #3
LQ Newbie (Original Poster) | Registered: Jul 2013 | Posts: 2
Thanks for providing clarity there. I re-read the standard and it makes sense now.
Are you aware of any other method of increasing point-to-point throughput, other than upgrading to a faster interface? 10G is pretty expensive and not really an option for testing purposes.
07-28-2013, 01:49 PM | #4
Senior Member | Registered: Jan 2012 | Distribution: Slackware | Posts: 3,348
According to the kernel bonding driver documentation, the bonding driver actually does support non-LACP, round-robin/sequential packet ordering (the parameter is "mode=0" or "mode=balance-rr").
I'm not aware of any switches that support this mode, but if you're going to connect two Linux servers directly, it should work.
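A rough sketch of what that could look like with the same ifcfg-style files as above, assuming a direct cable between the two servers (the address is just a placeholder):
Code:
# /etc/sysconfig/network-scripts/ifcfg-bond0 on each server (direct cable, no switch)
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.0.6.24          # placeholder; use a different address on the other server
NETMASK=255.255.255.0
NM_CONTROLLED=no
# balance-rr (mode 0) sends packets round-robin over both slaves, so a single
# TCP stream can exceed one link's throughput, at the cost of possible reordering.
BONDING_OPTS="mode=balance-rr miimon=100"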