Old 06-14-2006, 09:16 PM   #1
BrianK
Senior Member
 
Registered: Mar 2002
Location: Los Angeles, CA
Distribution: Debian, Ubuntu
Posts: 1,334

Rep: Reputation: 51
Who is getting more than 300Mb/sec over GigE?


Is anyone getting over 300Mb/sec over gigE going from one computer to another through a switch?

If so, what's the setup?

I've never managed more than about 300Mb/sec & seem to average about 200Mb/sec. I'm running gigE on lots of computers with a managed Linksys gigE switch in the middle. Everything is going over copper. Even if only one computer is pulling from one other computer, I'm still topping out at about 300. Can it go faster, or is that just about the limit?
 
Old 06-15-2006, 05:38 AM   #2
wslyhbb
Member
 
Registered: Apr 2002
Location: Chicago, IL
Distribution: Mandriva 2009.0 PowerPack x86_64
Posts: 150

Rep: Reputation: 15
My guess: the hard drive cannot write any faster.
 
Old 06-15-2006, 07:45 AM   #3
zidane_tribal
Member
 
Registered: Apr 2005
Location: chained to my console.
Distribution: LFS 6.1
Posts: 143

Rep: Reputation: 18
Quote:
Originally Posted by wslyhbb
My guess: the hard drive cannot write any faster.
Indeed, I have to agree. The exact maximum sustainable read/write speeds escape me, but I do recall seeing other people with almost exactly the same problem, i.e. all the bandwidth in the world and supporting hardware that just wasn't able to utilise it.

In theory, you could create a ramdisk and throw a big file into it, or spool from /dev/urandom on one machine into /dev/null on another. That would remove the hard drive from the transfer, if you wanted to see how high you could go.
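Something along these lines would do it (a rough sketch - the mount point, size and file name are just examples, and you need root):

mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk
# stage a ~400MB test file in RAM, then copy it across the network
dd if=/dev/zero of=/mnt/ramdisk/testfile bs=1M count=400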
 
Old 06-15-2006, 12:45 PM   #4
macemoneta
Senior Member
 
Registered: Jan 2005
Location: Manalapan, NJ
Distribution: Fedora x86 and x86_64, Debian PPC and ARM, Android
Posts: 4,593
Blog Entries: 2

Rep: Reputation: 344
/dev/urandom can't keep up, because it's CPU intensive for large amounts of data. Use /dev/zero. In addition to the hard drive, the system bus can be a bottleneck. Remember that data has to cross the system bus twice (HD to CPU, CPU to Ethernet). If you want to sustain high bandwidth, you need a system designed for it.
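If you want to take the disks out of the picture entirely, something like this works (a sketch - the port and address are examples, and some netcat versions drop the -p when listening):

# on the receiving machine: listen on TCP port 5001 and discard everything
nc -l -p 5001 > /dev/null
# on the sending machine: time 1GB of zeros across the wire
dd if=/dev/zero bs=1M count=1000 | nc 192.168.1.10 5001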
 
Old 06-15-2006, 01:05 PM   #5
BrianK
Senior Member
 
Registered: Mar 2002
Location: Los Angeles, CA
Distribution: Debian, Ubuntu
Posts: 1,334

Original Poster
Rep: Reputation: 51
hmm.. when I transfer a file more than once, it's usually cached on all tries after the first. I can verify that by seeing that the HD access light never comes on on the sending side.

On the first try, I usually get just over 20MB/s... every time after that is usually 30MB/s (because there's no HD access on my workstation side).

On my server (receiving) side, I've got a pretty beefy RAID that I've tested with dd to write at about 70MB/sec, so I don't *think* that write speed is the bottleneck. Furthermore, both the NIC and the RAID are on 64-bit PCI and operate at 133MHz (PCI-X) & there are two EM64T Xeons processing all the data. No other slow PCI cards are on the bus - just those two.

Could the sending side be the bottleneck?

Last edited by BrianK; 06-15-2006 at 01:09 PM.
 
Old 06-15-2006, 01:15 PM   #6
macemoneta
Senior Member
 
Registered: Jan 2005
Location: Manalapan, NJ
Distribution: Fedora x86 and x86_64, Debian PPC and ARM, Android
Posts: 4,593
Blog Entries: 2

Rep: Reputation: 344
Quote:
Could the sending side be the bottleneck?
Of course.

Also, you may want to look at expanding your TCP buffers, which will help as latency goes up:

echo 2500000 > /proc/sys/net/core/wmem_max
echo 2500000 > /proc/sys/net/core/rmem_max
echo "4096 5000000 5000000" > /proc/sys/net/ipv4/tcp_rmem
echo "4096 65536 5000000" > /proc/sys/net/ipv4/tcp_wmem
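Those take effect immediately but won't survive a reboot; to make them stick, the equivalent keys can go in /etc/sysctl.conf (a sketch, same values as above):

net.core.wmem_max = 2500000
net.core.rmem_max = 2500000
net.ipv4.tcp_rmem = 4096 5000000 5000000
net.ipv4.tcp_wmem = 4096 65536 5000000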

You don't mention if you are using jumbo frames (an oversize MTU, usually 9000). That can also significantly improve throughput, but all equipment in the path must support it.
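For what it's worth, enabling jumbo frames is just a matter of raising the MTU on each interface (a sketch, assuming the interface is eth0; every NIC, driver and switch port in the path has to support it, or the large frames will simply get dropped):

ifconfig eth0 mtu 9000
# or, with iproute2:
ip link set dev eth0 mtu 9000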
 
Old 06-15-2006, 02:10 PM   #7
BrianK
Senior Member
 
Registered: Mar 2002
Location: Los Angeles, CA
Distribution: Debian, Ubuntu
Posts: 1,334

Original Poster
Rep: Reputation: 51
Quote:
Originally Posted by macemoneta
Of course.

Also, you may want to look at expanding your TCP buffers, which will help as latency goes up:

echo 2500000 > /proc/sys/net/core/wmem_max
echo 2500000 > /proc/sys/net/core/rmem_max
echo "4096 5000000 5000000" > /proc/sys/net/ipv4/tcp_rmem
echo "4096 65536 5000000" > /proc/sys/net/ipv4/tcp_wmem

You don't mention if you are using jumbo frames (an oversize MTU, usually 9000). That can also significantly improve throughput, but all equipment in the path must support it.
I am not using jumbo frames & was wondering if that would help. I don't know how to enable jumbo frames in Linux, but I'm sure the internet will tell me how.

I'll try expanding the buffers. didn't know about that.

Thanks!
 
Old 06-15-2006, 09:21 PM   #8
fedora4002
Member
 
Registered: Mar 2004
Posts: 135

Rep: Reputation: 15
Is there any benchmark data for GigE?
 
Old 06-15-2006, 10:46 PM   #9
macemoneta
Senior Member
 
Registered: Jan 2005
Location: Manalapan, NJ
Distribution: Fedora x86 and x86_64, Debian PPC and ARM, Android
Posts: 4,593
Blog Entries: 2

Rep: Reputation: 344
Quote:
Originally Posted by fedora4002
Is there any benchmark data for GigE?
GigE is a physical interface, so benchmarks are not particularly meaningful (it would be like benchmarking a serial interface, or a printer port). The speed of the interface is 1000Mb/s. The speed you get will depend on the machine the interface is installed in, whether or not the interface is an integral part of the chipset, the bus, the driver, the TCP stack and tuning parameters, the source/destination of the I/O, and the layer 3 protocol being used.

Last edited by macemoneta; 06-15-2006 at 10:48 PM.
 
Old 06-15-2006, 11:21 PM   #10
gregeb
LQ Newbie
 
Registered: Dec 2005
Posts: 1

Rep: Reputation: 0
what about a TOE card

hi BrianK,

I assume you've done the dd w/ /dev/null test to check out the system.
What about an iSCSI TOE card?

I always throw hardware at the problem! QLogic, ATTO, and Adaptec all make them. Go with 'Q' - I'm not a stockholder.

greg

Quote:
Originally Posted by BrianK
Is anyone getting over 300Mb/sec over gigE going from one computer to another through a switch?

If so, what's the setup?

I've never managed more than about 300Mb/sec & seem to average about 200Mb/sec. I'm running gigE on lots of computers with a managed Linksys gigE switch in the middle. Everything is going over copper. Even if only one computer is pulling from one other computer, I'm still topping out at about 300. Can it go faster, or is that just about the limit?
 
Old 06-16-2006, 03:18 AM   #11
djtm
LQ Newbie
 
Registered: Jun 2006
Posts: 4

Rep: Reputation: 0
I'm really curious about the units here. GigE is Gigabit Ethernet, right? So that's 1000 Mbit/s, which puts the maximum possible transfer rate at 125 MByte/s. (So I guess you're only talking about Mbit.) That is more than most hard disks can sustain, for sure. So you would probably need a PCI Express ethernet card and a ramdisk at first to reach the limit of your setup. The link quality could also be an issue, e.g. if you have a very long cable between the computers. Then come protocol optimizations: I think UDP would be faster than TCP, since fewer packets need to be transferred (no ACKs). You might also want to use dedicated network performance tools; they take the disks and filesystem out of the picture and add very little overhead of their own.
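For example, iperf is handy for this (a sketch - assuming it's installed on both machines, with 192.168.1.10 standing in for the server's address):

# on one machine, run the server
iperf -s
# on the other, run a 30-second TCP test against it
iperf -c 192.168.1.10 -t 30
# add -u on both ends for a UDP test, e.g. iperf -u -c 192.168.1.10 -b 900M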
Good luck and post your results
 
Old 06-16-2006, 10:39 AM   #12
mhcox
Member
 
Registered: Aug 2005
Location: Albuquerque, NM
Distribution: Fedora
Posts: 30

Rep: Reputation: 15
Quote:
Originally Posted by macemoneta
GigE is a physical interface, so benchmarks are not particularly meaningful (it would be like benchmarking a serial interface, or a printer port). The speed of the interface is 1000Mb/s. The speed you get will depend on the machine the interface is installed in, whether or not the interface is an integral part of the chipset, the bus, the driver, the TCP stack and tuning parameters, the source/destination of the I/O, and the layer 3 protocol being used.
The 1000Mb/s is the absolute maximum physical transfer rate, not counting all the bits used up for TCP/IP and Ethernet frames. All the framing adds a lot of overhead, so you'll never get to the physical maximum.
 
Old 06-16-2006, 11:06 AM   #13
macemoneta
Senior Member
 
Registered: Jan 2005
Location: Manalapan, NJ
Distribution: Fedora x86 and x86_64, Debian PPC and ARM, Android
Posts: 4,593
Blog Entries: 2

Rep: Reputation: 344
Quote:
Originally Posted by mhcox
The 1000Mb/s is the absolute maximum physical transfer rate, not counting all the bits used up for TCP/IP and Ethernet frames. All the framing adds a lot of overhead, so you'll never get to the physical maximum.
Actually, the 1000Mb/s is the absolute maximum physical transfer rate, counting all the bits used up for TCP/IP and Ethernet frames. All the framing adds a lot of overhead, so your layer 3 data rate will never get to the physical maximum.

The physical layer (layer 1) doesn't care what bits you send - it doesn't differentiate a framing bit from a data bit. It's a 1000Mb/s interface; how you arrange the bits is really immaterial. This is why jumbo frames are used on high speed interfaces - you arrange more data bits relative to framing bits, and your layer 3 throughput goes up. The bit rate at the interface doesn't change.
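To put rough numbers on it (back-of-envelope, assuming TCP over IPv4 with no options, and counting preamble and interframe gap):

# standard 1500-byte MTU:
#   on the wire per frame: 1500 + 14 (Ethernet header) + 4 (FCS) + 8 (preamble) + 12 (interframe gap) = 1538 bytes
#   TCP payload per frame: 1500 - 20 (IP header) - 20 (TCP header) = 1460 bytes
echo "scale=1; 1000 * 1460 / 1538" | bc    # ~949 Mb/s of usable TCP payload
# with 9000-byte jumbo frames: 8960 payload bytes out of 9038 on the wire
echo "scale=1; 1000 * 8960 / 9038" | bc    # ~991 Mb/s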
 
Old 06-16-2006, 04:40 PM   #14
mhcox
Member
 
Registered: Aug 2005
Location: Albuquerque, NM
Distribution: Fedora
Posts: 30

Rep: Reputation: 15
Quote:
Originally Posted by macemoneta
Actually, the 1000Mb/s is the absolute maximum physical transfer rate, counting all the bits used up for TCP/IP and Ethernet frames. All the framing adds a lot of overhead, so your layer 3 data rate will never get to the physical maximum.

The physical layer (layer 1) doesn't care what bits you send - it doesn't differentiate a framing bit from a data bit. It's a 1000Mb/s interface; how you arrange the bits is really immaterial. This is why jumbo frames are used on high speed interfaces - you arrange more data bits relative to framing bits, and your layer 3 throughput goes up. The bit rate at the interface doesn't change.
Oops! Yes, what you said. I think the 200-300Mb/s BrianK is getting is the layer 4 (transport) data rate. You have Ethernet frames that contain IP packets that contain TCP segments, like a set of nested Russian dolls. Each layer of encapsulation uses up bits for checksums, frame/packet IDs, addresses, etc., and that eats into the 1Gb/s total bandwidth.

I don't know if that would account for all of the missing bandwidth. Other components, such as the motherboard chipset, could also be a factor.

See these wikipedia links for more info:

http://en.wikipedia.org/wiki/Etherne...al_description
http://en.wikipedia.org/wiki/Interne...del_comparison
http://en.wikipedia.org/wiki/Internet_Protocol This one in particular has a nice diagram explaining the nesting structure (although for UDP not TCP).
 
Old 06-16-2006, 08:00 PM   #15
BrianK
Senior Member
 
Registered: Mar 2002
Location: Los Angeles, CA
Distribution: Debian, Ubuntu
Posts: 1,334

Original Poster
Rep: Reputation: 51
Thanks for the responses, guys.

re: /dev/null - Well, I was pulling from /dev/zero on the sender's side for the dd test - "time dd if=/dev/zero of=/nfsmount/testfile bs=4k count=1M"

djtm:
I tried to differentiate megabits from megabytes by using Mb vs MB respectively. I know it's not necessarily 10:1, but it's about that.
The ethernet card is on a 133MHz PCI-X slot - it's actually a 4-port Intel card & I have an 802.3ad trunk configured across the 4 of them. The RAID card is on another 133MHz PCI-X slot. Both cards run at 133MHz. There are a couple of 100MHz slots on the mobo, but I'm not using them.
Only one cable in the setup is longer than 20 ft. Most of them (90% at least) are less than 10 ft. All of them are cat 5e.
I do have four switches (and a wireless access point and a router), though I've been running my tests between computers on the same switch - a Linksys SRW2024.

Still haven't set up a RAM drive.
Still haven't set up jumbo frames.
Reading those wiki pages now. I really slacked off in my networking class back in college - I've forgotten most of what I learned about OSI layers now that I need it (OSI - that's the layer model, right?)

Thanks for the suggestions. If anything else comes up, please post it. I've been wondering about this stuff since I first moved to gigabit ethernet (when I paid $2400 for a crappy netgear switch).
 
  

