Old 02-26-2014, 12:20 PM   #1
yenn
Member
 
Registered: Jan 2011
Location: Czech Republic
Distribution: Slackware, Gentoo, FreeBSD
Posts: 176

Rep: Reputation: 28
Linux bridge performance/throughput


I suddenly ran into network throughput problems on our servers. The servers act as KVM hypervisors and use bridges to create several local networks. Each bridge is connected to a VLAN (inter-server network), which in turn is connected to two bonded NICs.

The network diagram is in the attached picture (the forum software somehow breaks ASCII diagrams between CODE tags).

Bridge throughput (measured with iperf) fluctuates between 40 and 300 Mbit/s on the hypervisor with the highest load, and between 400 and 600 Mbit/s on a hypervisor with almost no load at all (2 idle virtual servers). This has a negative impact on every application that uses the database (which runs on a separate virtual server). At first I thought something was wrong with bond0, since everything ultimately depends on the physical NICs, but it doesn't matter whether the bridge is connected to the VLAN or disconnected. STP is enabled.
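
For reference, the numbers above come from plain TCP iperf runs between the machines, roughly like this (the address and duration are just placeholders for how I invoke it):

Code:
# on the receiving side
iperf -s

# on the sending side: a 30-second TCP throughput test against the receiver
iperf -c 192.168.1.10 -t 30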

I guess there must be some theoretical maximum throughput based on CPU usage, network traffic, etc., but given that the hypervisors have 24 CPUs and the virtual servers aren't under significant load most of the time, I can't figure out what went wrong.

A few details about the hypervisor:
Gentoo Linux with custom compiled kernel 3.7.5-hardened (I can provide kernel config if needed)
bridge-utils 1.4

Is there anything I can do to debug it more?
Attached: network-diagram.png (14.2 KB)

Last edited by yenn; 02-27-2014 at 10:10 AM.
 
Old 02-27-2014, 10:34 AM   #2
nikmit
Member
 
Registered: May 2011
Location: Nottingham, UK
Distribution: Debian
Posts: 178

Rep: Reputation: 34
Looking at the diagram, you don't need STP - try turning it off. We never use it on our Xen servers.
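
With bridge-utils that is something along these lines (br0 is just a placeholder for your bridge name):

Code:
# disable spanning tree on the bridge
brctl stp br0 off

# verify - the "STP enabled" column should now read "no"
brctl show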
 
Old 03-04-2014, 01:12 PM   #3
yenn
Member
 
Registered: Jan 2011
Location: Czech Republic
Distribution: Slackware, Gentoo, FreeBSD
Posts: 176

Original Poster
Rep: Reputation: 28
Thanks for the suggestion, but it didn't change anything. I'll try a newer kernel and report back.
 
Old 03-14-2014, 04:51 PM   #4
Lantzvillian
Member
 
Registered: Oct 2007
Location: BC, Canada
Distribution: Fedora, Debian
Posts: 210

Rep: Reputation: 41
If you are using Ethernet bridging in Linux together with bridge netfilter (i.e. firewalling on the bridge), expect an epic loss of throughput. For example, on a 10/100 Mbit link expect at most about 30 Mbit/s with 1400-byte frames, and even less with 128-byte frames, unless you have spent a lot of time on in-driver/kernel performance optimizations.

PS. Tuning the network interface interrupt coalescing values will help a lot with the number of interrupts and with keeping the system responsive. The netfilter bridge code, however... it just needs a lot of work.
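
As a rough starting point, coalescing can be inspected and adjusted with ethtool, something like this (eth0 and the value are only illustrative - what your NIC accepts depends on the driver):

Code:
# show the current interrupt coalescing settings
ethtool -c eth0

# example: let the NIC wait up to 100 us before raising an RX interrupt,
# so packets get batched (illustrative value - tune for your traffic)
ethtool -C eth0 rx-usecs 100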
 
Old 03-26-2014, 10:00 AM   #5
yenn
Member
 
Registered: Jan 2011
Location: Czech Republic
Distribution: Slackware, Gentoo, FreeBSD
Posts: 176

Original Poster
Rep: Reputation: 28
Quote:
Originally Posted by Lantzvillian View Post
If you are using Ethernet bridging in Linux together with bridge netfilter (i.e. firewalling on the bridge), expect an epic loss of throughput. For example, on a 10/100 Mbit link expect at most about 30 Mbit/s with 1400-byte frames, and even less with 128-byte frames, unless you have spent a lot of time on in-driver/kernel performance optimizations.
Do you mean ebtables? I use packet filtering with iptables, with rules like:

Code:
iptables -A INPUT -i br0 [...]
By the way, bridge performance increased slightly with a newer kernel (3.9.9-hardened), and the throughput values are more stable, but it's still much less than I would expect.

Last edited by yenn; 03-28-2014 at 04:52 PM. Reason: typo
 
Old 03-27-2014, 02:04 PM   #6
Lantzvillian
Member
 
Registered: Oct 2007
Location: BC, Canada
Distribution: Fedora, Debian
Posts: 210

Rep: Reputation: 41
Yes, ebtables is one part of the bad performance. I assume you have stripped all the debug options out of the kernel, configured branch prediction, and chosen between SLUB and SLAB allocation as well - if you haven't, don't expect a serious performance change.

Another factor is interrupt handling; we have been doing some profiling, but it's not looking good for us. NIC drivers are also a large part of this (at least in Linux; I can't say for your VMs). Did you tweak any ethtool settings?
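
A quick way to see where interrupts and offloads stand on the physical NICs (eth0 is a placeholder for the slaves in your bond):

Code:
# how the NIC's interrupts are distributed across CPUs
grep eth0 /proc/interrupts

# which hardware offloads (GRO, TSO, checksum offload, ...) are enabled
ethtool -k eth0

# driver/queue statistics - useful for spotting drops and errors
ethtool -S eth0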
 
Old 03-28-2014, 04:52 PM   #7
yenn
Member
 
Registered: Jan 2011
Location: Czech Republic
Distribution: Slackware, Gentoo, FreeBSD
Posts: 176

Original Poster
Rep: Reputation: 28
Quote:
Originally Posted by Lantzvillian View Post
Yes, ebtables is one part of the bad performance. I assume you have stripped all the debug options out of the kernel, configured branch prediction, and chosen between SLUB and SLAB allocation as well - if you haven't, don't expect a serious performance change.
If I use iptables rules on bridge interfaces, does that mean I'm using ebtables indirectly? I really can't tell right now.
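
This is roughly how I've been trying to tell (assuming the bridge netfilter sysctl exists on this kernel):

Code:
# any explicit ebtables rules loaded?
ebtables -L

# if this reports 1, bridged frames are also run through the iptables chains
sysctl net.bridge.bridge-nf-call-iptables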

Quote:
Originally Posted by Lantzvillian View Post
Another factor is interrupt handling; we have been doing some profiling, but it's not looking good for us. NIC drivers are also a large part of this (at least in Linux; I can't say for your VMs). Did you tweak any ethtool settings?
Well, actually I believe the main problem might be in Linux kernel bonding, because several people have told me that Linux bonding really sucks. As far as I know, it can create really weird problems, like an almost unusable connection to a certain HP disk array over iSCSI.

I haven't tweaked NIC settings with ethtool or debugged this extensively, because I don't have much experience with debugging networking in Linux, and the only thing I can think of is tweaking TCP in the kernel, which strikes me as a last resort. Right now I'm trying Open vSwitch instead (see http://www.linuxquestions.org/questi...an-4175499820/), as someone recommended it to me - even for bonding.
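
Before blaming the bond I'll at least check which mode it runs in and whether both slaves look healthy, something like:

Code:
# bonding mode, link status and counters for each slave NIC
cat /proc/net/bonding/bond0

# kernel messages about link flaps or slave failures
dmesg | grep -i bond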

Last edited by yenn; 03-28-2014 at 04:54 PM.
 
  

