Old 07-25-2013, 03:23 PM   #1
stevedd
LQ Newbie
 
Registered: Jul 2013
Posts: 2

Rep: Reputation: Disabled
Fedora 18 bonded interface LACP mode not aggregating link throughput


Hello,

I've bonded two 1Gbit ports together on each of two servers using bonding mode 4 (802.3ad/LACP), but transfers between them never exceed 1Gbit/s. The Juniper EX2200 switch is configured for LACP.

I'm trying to test glusterfs performance over aggregated links and this is a bit of a stumbling block...

Switch interfaces (note the reported speed is 2Gbps):

Code:
root@EX2200-01> show interfaces ae2    
Physical interface: ae2, Enabled, Physical link is Up
  Interface index: 130, SNMP ifIndex: 571
  Description: LAG_6-7
  Link-level type: Ethernet, MTU: 1514, Speed: 2Gbps, BPDU Error: None, MAC-REWRITE Error: None, Loopback: Disabled, Source filtering: Disabled, Flow control: Disabled,
  Minimum links needed: 1, Minimum bandwidth needed: 0
  Device flags   : Present Running
  Interface flags: SNMP-Traps Internal: 0x4000
  Current address: a8:d0:e5:b7:6b:05, Hardware address: a8:d0:e5:b7:6b:05
  Last flapped   : 2013-07-24 09:50:42 EDT (00:10:45 ago)
  Input rate     : 2048 bps (2 pps)
  Output rate    : 2048 bps (2 pps)

  Logical interface ae2.0 (Index 76) (SNMP ifIndex 544)
    Flags: SNMP-Traps 0x40004000 Encapsulation: ENET2
    Statistics        Packets        pps         Bytes          bps
    Bundle:
        Input :            36          0         16056            0
        Output:             6          0           276            0
    Protocol eth-switch
      Flags: Trunk-Mode

{master:0}
root@EX2200-01> show interfaces ae3    
Physical interface: ae3, Enabled, Physical link is Up
  Interface index: 156, SNMP ifIndex: 545
  Description: LAG_8-9
  Link-level type: Ethernet, MTU: 1514, Speed: 2Gbps, BPDU Error: None, MAC-REWRITE Error: None, Loopback: Disabled, Source filtering: Disabled, Flow control: Disabled,
  Minimum links needed: 1, Minimum bandwidth needed: 0
  Device flags   : Present Running
  Interface flags: SNMP-Traps Internal: 0x4000
  Current address: a8:d0:e5:b7:6b:06, Hardware address: a8:d0:e5:b7:6b:06
  Last flapped   : 2013-07-21 16:09:46 EDT (2d 17:51 ago)
  Input rate     : 2560 bps (3 pps)
  Output rate    : 2560 bps (3 pps)

  Logical interface ae3.0 (Index 66) (SNMP ifIndex 560)
    Flags: SNMP-Traps 0x40004000 Encapsulation: ENET2
    Statistics        Packets        pps         Bytes          bps
    Bundle:
        Input :            35          0         14434            0
        Output:             4          0           184            0
    Protocol eth-switch
      Flags: Trunk-Mode
The two servers have the same network configuration:

Code:
[root@ovirt002 ~]# ifconfig
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet6 fe80::225:90ff:fe78:5f4  prefixlen 64  scopeid 0x20<link>
        ether 00:25:90:78:05:f4  txqueuelen 0  (Ethernet)
        RX packets 3851071  bytes 4649202525 (4.3 GiB)
        RX errors 0  dropped 17010  overruns 0  frame 0
        TX packets 1023469  bytes 101350938 (96.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

br-bond0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.6.24  netmask 255.255.255.0  broadcast 10.0.6.255
        inet6 fe80::225:90ff:fe78:5f4  prefixlen 64  scopeid 0x20<link>
        ether 00:25:90:78:05:f4  txqueuelen 0  (Ethernet)
        RX packets 609318  bytes 4378166987 (4.0 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 551837  bytes 42868942 (40.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 428131  bytes 22717269 (21.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 428131  bytes 22717269 (21.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

p255p1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 00:25:90:78:05:f4  txqueuelen 1000  (Ethernet)
        RX packets 386017  bytes 91765659 (87.5 MiB)
        RX errors 0  dropped 5  overruns 0  frame 0
        TX packets 775537  bytes 70991171 (67.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xdf920000-df940000  

p255p2: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 00:25:90:78:05:f4  txqueuelen 1000  (Ethernet)
        RX packets 3465068  bytes 4557437790 (4.2 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 247948  bytes 30363031 (28.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xdf900000-df920000
Code:
[root@ovirt002 ~]# cat /etc/modprobe.d/bonding.conf 
alias bond0 bonding
options bond0 miimon=100 mode=4 lacp_rate=1


[root@ovirt002 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0 
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=static
USERCTL=no
BRIDGE=br-bond0


[root@ovirt002 ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-bond0 
DEVICE=br-bond0
ONBOOT=yes
TYPE=Bridge
DELAY=0
IPADDR=10.0.6.24
NETMASK=255.255.255.0
GATEWAY=10.0.6.1
BOOTPROTO=static
NM_CONTROLLED=no
STP=no

[root@ovirt002 ~]# cat /etc/sysconfig/network-scripts/ifcfg-p255p1
DEVICE=p255p1
ONBOOT=yes
HWADDR=00:25:90:78:05:f4
MTU=1500
NM_CONTROLLED=no
STP=no
MASTER=bond0
SLAVE=yes


[root@ovirt002 ~]# cat /etc/sysconfig/network-scripts/ifcfg-p255p2
HWADDR=00:25:90:78:05:F5
NAME=p255p2
UUID=5ac0dfca-de67-441d-80cd-02fee5706edb
ONBOOT=yes
STP=no
MASTER=bond0
SLAVE=yes
NM_CONTROLLED=no
lspci shows the NIC as:

Code:
02:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
I'm wondering if maybe the network cards aren't fully supported by the Linux bonding driver...
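
For reference, this is how the bond status can be checked on the Linux side (just the commands, using the bond0/p255p1/p255p2 names from the config above; I've left the output out):

Code:
# 802.3ad aggregator info, partner details and per-slave state
cat /proc/net/bonding/bond0

# Negotiated speed/duplex on each slave
ethtool p255p1
ethtool p255p2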
 
Old 07-25-2013, 08:33 PM   #2
Ser Olmy
Senior Member
 
Registered: Jan 2012
Distribution: Slackware
Posts: 3,340

Rep: Reputation: Disabled
I'm afraid this is probably not a bug, nor a limitation specific to the Linux bonding driver.

The question is how packets are being distributed across the individual links in the team. Most switches can be configured to select a link based on a hash of the destination MAC or IPv4 address in the frame. This means that traffic to the same address will always be sent over the same sublink, effectively limiting the bandwidth to that of a single team member.

Some equipment can include layer 4 information in the hash, such as TCP/UDP port numbers. This helps, but any individual TCP or UDP session will still be limited to one sublink. This is actually intentional, as it avoids re-ordering of frames (see the Wikipedia article on Link Aggregation for more information).
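
One way to see this in practice is to test with several parallel TCP streams instead of a single one; if both ends hash on layer 3+4 information, multiple streams have a chance of being spread across different sublinks. A minimal sketch, assuming iperf is installed on both servers and using the 10.0.6.24 address from your post:

Code:
# On the receiving server
iperf -s

# On the sending server: 8 parallel TCP streams
iperf -c 10.0.6.24 -P 8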

This is not much of an issue if the server is communicating with lots of different clients through a switch, or if the link is part of a network backbone between switches. In your case, however, the LACP link is set up between two servers, and the source and destination MAC addresses will always be the same. If neither server is routing IP traffic or has multiple IP addresses, even the IP addresses at each end will always be the same.

Unless the Linux bonding driver supports the inclusion of Layer 4 information in the sublink selection algorithm, or can be configured to use simple round-robin load balancing across sublinks, an LACP team between two servers won't increase the total bandwidth significantly, if at all.
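
For what it's worth, the bonding driver does expose an xmit_hash_policy parameter that can include layer 3+4 information in the transmit hash. Keep in mind it only affects which sublink the server transmits on (the switch still applies its own hash on the return path), and a single TCP stream will still stay on one sublink. A sketch based on your existing bonding.conf, not something I've tested on your setup:

Code:
# /etc/modprobe.d/bonding.conf -- add layer 3+4 hashing to the existing options
alias bond0 bonding
options bond0 miimon=100 mode=4 lacp_rate=1 xmit_hash_policy=layer3+4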
 
Old 07-28-2013, 12:37 PM   #3
stevedd
LQ Newbie
 
Registered: Jul 2013
Posts: 2

Original Poster
Rep: Reputation: Disabled
Thanks for providing clarity there. I re-read the standard and it makes sense now.

Are you aware of any other method of increasing point-to-point throughput, other than upgrading to a faster interface? 10G is pretty expensive and not really an option for testing purposes.
 
Old 07-28-2013, 12:49 PM   #4
Ser Olmy
Senior Member
 
Registered: Jan 2012
Distribution: Slackware
Posts: 3,340

Rep: Reputation: Disabled
According to the kernel bonding driver documentation, the bonding driver does support a non-LACP round-robin mode that sends packets sequentially across all slaves (the parameter is "mode=0" or "mode=balance-rr").

I'm not aware of any switches that support this mode, but if you're going to connect two Linux servers directly, it should work.
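
A minimal sketch of what that could look like, reusing the file layout from your first post (not something I've tested on Fedora 18): change only the mode in bonding.conf and cable the two servers' ports directly to each other instead of to the switch; the ifcfg files for bond0 and the slaves can stay as they are.

Code:
# /etc/modprobe.d/bonding.conf -- round-robin instead of 802.3ad/LACP
alias bond0 bonding
options bond0 miimon=100 mode=balance-rr

Be aware that balance-rr can deliver TCP segments out of order, so a single stream usually ends up somewhere below the theoretical 2Gbit/s.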
 
  

