LinuxQuestions.org
Linux - Networking This forum is for any issue related to networks or networking.
Routing, network cards, OSI, etc. Anything is fair game.

Old 06-29-2006, 01:52 PM   #31
BrianK
Senior Member
 
Registered: Mar 2002
Location: Los Angeles, CA
Distribution: Debian, Ubuntu
Posts: 1,334

Original Poster
Rep: Reputation: 51

Quote:
Originally Posted by Bobymc
With 1000Mb/sec * 4 peak, don't you think you need Dual-Core processor/1600 Front Side Bus Mobo/15000rpm SCSI media?
no, no, and no. (well.. maybe)

my setup -
64-bit PCI bus @ 133 MHz
two dual-core Xeons (so there appear to be 4 procs) @ 800 MHz FSB
8 disk RAID array - 7200 rpm drives w/ 16MB disk buffers.
3ware 9550SX PCI-X RAID w/ 256MB memory - theoretically capable of 800 MB/s (that's megabytes, not megabits) reads, 380 MB/s writes

All that said, I don't think 1000Mb/sec is even possible - it's more like 750 Mb/sec once the packets are loaded and shipped. Regardless, it should be more than 250-300Mb/sec... Sure, maybe a SCSI array would be better, but at this point, hardware on the server is not the bottleneck.
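For what it's worth, framing overhead puts a hard ceiling on GigE well below 1000 Mb/s of user data. A back-of-envelope sketch (assuming standard Ethernet framing: 8-byte preamble, 14-byte header, 4-byte FCS, 12-byte inter-frame gap, plus 20-byte IP and 20-byte TCP headers in each 1500-byte MTU frame):

```shell
# Back-of-envelope ceiling for TCP goodput on gigabit Ethernet at MTU 1500.
awk 'BEGIN {
  payload = 1500 - 40            # TCP payload bytes per frame (minus IP+TCP headers)
  wire    = 1500 + 38            # bytes actually occupying the wire per frame
  printf "max TCP goodput: %.0f Mbit/s\n", 1000 * payload / wire
}'
```

That ~949 Mb/s ceiling is for TCP payload only; UDP or raw frames get slightly more.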
 
Old 06-29-2006, 10:35 PM   #32
lazlow
Senior Member
 
Registered: Jan 2006
Posts: 4,363

Rep: Reputation: 172
BrianK

I ran across an article entitled "Squeeze Your Gigabit NIC for Top Performance". Not much on jumbo frames, but useful(?): http://www.enterprisenetworkingplane...le.php/3485486

lazlow
 
Old 06-30-2006, 12:14 PM   #33
Bobymc
Member
 
Registered: Apr 2006
Location: INDONESIA
Distribution: SLAX,Damn S.L,Suse,Mandrake,Rd HAT62,72,73,90, Mandriva2k6, FEdora, SUNmicrosys.
Posts: 269

Rep: Reputation: 30
Yes, at this point there's no hardware problem, though in the future it will surely matter.

I've never heard of anyone getting a full 1 Gb/sec (peak) out of GigE, but you're right that 300 Mbps should be easy to achieve.

For comparison, a dual-core Xeon would be faster than a dual-core Pentium and handles heat dissipation better. (I haven't checked on Itanium for years.)

We'll be happy to hear more from you........

Regards
 
Old 06-30-2006, 02:02 PM   #34
BrianK
Senior Member
 
Registered: Mar 2002
Location: Los Angeles, CA
Distribution: Debian, Ubuntu
Posts: 1,334

Original Poster
Rep: Reputation: 51
Quote:
Originally Posted by lazlow
BrianK

I ran across an article entitled "Squeeze Your Gigabit NIC for Top Performance". Not much on jumbo frames, but useful(?): http://www.enterprisenetworkingplane...le.php/3485486

lazlow
link doesn't work.
 
Old 06-30-2006, 06:35 PM   #35
lazlow
Senior Member
 
Registered: Jan 2006
Posts: 4,363

Rep: Reputation: 172
BrianK

Oops!

http://www.enterprisenetworkingplane...le.php/3485486

lazlow
 
Old 07-04-2006, 01:29 PM   #36
Bobymc
Member
 
Registered: Apr 2006
Location: INDONESIA
Distribution: SLAX,Damn S.L,Suse,Mandrake,Rd HAT62,72,73,90, Mandriva2k6, FEdora, SUNmicrosys.
Posts: 269

Rep: Reputation: 30
I'm just wondering how we'd implement these settings on WiFi.

Last edited by Bobymc; 07-07-2006 at 12:05 PM.
 
Old 01-16-2007, 05:49 PM   #37
chrisphillips
LQ Newbie
 
Registered: Jan 2007
Location: Sydney
Distribution: Debian & Ubuntu
Posts: 6

Rep: Reputation: 0
Easy to get 900+ Mbps

This thread is a little old but I am really surprised at the numbers people are discussing.

I can easily get ~600+ Mbps on a gigE PCI card (32bit 33 MHz) memory-memory.

With the onboard Nforce gigE adapters I can get 940 Mbps TCP throughput. Theoretical max is ~ 950 Mbps (if I accounted for the headers correctly). And this is between machines 1000km apart (but no routers in between).

Turning on jumbo frames (9000 bytes) I got 980 Mbps sustained on a link closer to 1500 km as the crow flies (RTT 20 ms).

For the long haul you need to increase the TCP window size, but for normal LANs this will not make much, if any, difference.
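The window needed for a long-haul path is just the bandwidth-delay product. A quick sketch for the 1 Gbit/s, 20 ms RTT figures quoted in this thread:

```shell
# Bandwidth-delay product: TCP window needed to keep the pipe full.
awk 'BEGIN {
  rate_bits = 1000000000         # 1 Gbit/s link
  rtt_s     = 0.020              # 20 ms round-trip time
  printf "window needed: %.1f MB\n", rate_bits * rtt_s / 8 / 1048576
}'
```

On a LAN with sub-millisecond RTT the same formula gives a window well under the usual defaults, which is why tuning makes no difference there.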

disk-disk I have achieved > 800 Mbps throughput from Sydney to Perth (~3000 km) using softraid 0. This is all on ~ 2 GHz AMD (some dual CPU but that does not make much difference).

Regards
Chris
 
Old 01-16-2007, 06:16 PM   #38
BrianK
Senior Member
 
Registered: Mar 2002
Location: Los Angeles, CA
Distribution: Debian, Ubuntu
Posts: 1,334

Original Poster
Rep: Reputation: 51
Quote:
Originally Posted by chrisphillips
I can easily get ~600+ Mbps on a gigE PCI card (32bit 33 MHz) memory-memory.

With the onboard Nforce gigE adapters I can get 940 Mbps TCP throughput. Theoretical max is ~ 950 Mbps (if I accounted for the headers correctly). And this is between machines 1000km apart (but no routers in between).

Turning on jumbo frames (9000 bytes) I got 980 Mbps sustained on a link closer to 1500 km as the crow flies (RTT 20 ms).
How are you testing this?
Do you have something abnormal in your configuration or did it work that way out of the box?
What distro?
What other hardware?

I'll admit I haven't tried computer-computer w/o a switch, but I've never seen anywhere near that speed. I'm curious (not doubting) how yours is so fast.
 
Old 01-17-2007, 09:04 PM   #39
chrisphillips
LQ Newbie
 
Registered: Jan 2007
Location: Sydney
Distribution: Debian & Ubuntu
Posts: 6

Rep: Reputation: 0
Quote:
Originally Posted by BrianK
How are you testing this?
Do you have something abnormal in your configuration or did it work that way out of the box?
What distro?
What other hardware?

I'll admit I haven't tried computer-computer w/o a switch, but I've never seen anywhere near that speed. I'm curious (not doubting) how yours is so fast.
Testing with iperf (2.0.2) and my own custom-written C program, which does memory-memory or disk-memory-memory-disk (or any of those sub-steps). Both iperf and my code get the same numbers.

Debian Sarge.

Tyan Thunder K8WE mobo with dual AMD64 CPUs (running a 32-bit kernel). This motherboard has dual ONBOARD GigE NICs, which helps. But I get similar numbers from a Dell 1600SC with a PCI-X Intel NIC and some Nforce3 "gaming" mobos.

Note that this is not computer-to-computer. There are lots of switches in between (as I said, separated by ~500-1000 km). There are no routers or firewalls (they are on a layer-2 network).

This is all "out of the box" stuff - other than TCP window of 1-2 MBytes.
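For reference, on 2.6 kernels the TCP window limits can be raised with sysctl. A sketch only: the variable names below are the standard Linux TCP tunables, but the values are illustrative and should really come from the path's bandwidth-delay product:

```shell
# Illustrative sysctls for large TCP windows on long-RTT paths (2.6 kernels).
# Values are examples, not tuned recommendations.
sysctl -w net.core.rmem_max=8388608                  # max receive buffer: 8 MB
sysctl -w net.core.wmem_max=8388608                  # max send buffer: 8 MB
sysctl -w net.ipv4.tcp_rmem="4096 87380 8388608"     # min/default/max receive
sysctl -w net.ipv4.tcp_wmem="4096 65536 8388608"     # min/default/max send
```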

Next step is to get this rate from Sydney to Amsterdam... that may be more of a challenge. :-)
 
Old 03-07-2007, 06:33 PM   #40
Snowbat
Member
 
Registered: Jun 2005
Location: q3dm7
Distribution: Mandriva 2010.0 x86_64
Posts: 338

Rep: Reputation: 31
ttcp is useful for benchmarking. I consistently see > 800 Mbit between two identically specced boxes using onboard nForce ethernet. The sending box was around 70% idle during the test, according to 'top'. The receiving box appears to be maxed out though. Standard 1500 MTU (no jumbo frames).

Sending box:
Code:
$ ttcp -ts -n 131072 192.168.1.50
ttcp-t: buflen=8192, nbuf=131072, align=16384/0, port=5001  tcp  -> 192.168.1.50
ttcp-t: socket
ttcp-t: connect
ttcp-t: 1073741824 bytes in 10.20 real seconds = 102789.47 KB/sec +++
ttcp-t: 131072 I/O calls, msec/call = 0.08, calls/sec = 12848.68
ttcp-t: 0.0user 0.8sys 0:10real 8% 0i+0d 0maxrss 0+3pf 19226+15csw
Receiving box:
Code:
$ ttcp -rs
ttcp-r: buflen=8192, nbuf=2048, align=16384/0, port=5001  tcp
ttcp-r: socket
ttcp-r: accept from 192.168.1.4
ttcp-r: 1073741824 bytes in 10.20 real seconds = 102785.12 KB/sec +++
ttcp-r: 131645 I/O calls, msec/call = 0.08, calls/sec = 12904.31
ttcp-r: 0.0user 9.8sys 0:10real 97% 0i+0d 0maxrss 0+2pf 995+172csw
Hardware:
2 x AMD 3800+ on MSI K8N NEO2 Platinum-54G, 1GB PC3200 Corsair TWINX1024-3200XL, Mandriva 2006 x86_64, forcedeth driver
1 x CNet 8 port gigabit fanless switch
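Converting ttcp's KB/sec figure to Mbit/s (assuming ttcp's KB means 1024 bytes) confirms the > 800 Mbit number from the run above:

```shell
# Convert ttcp's reported 102789.47 KB/sec (KB = 1024 bytes) to Mbit/s.
awk 'BEGIN { printf "%.0f Mbit/s\n", 102789.47 * 1024 * 8 / 1e6 }'
```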

Last edited by Snowbat; 03-07-2007 at 07:12 PM.
 
Old 03-09-2007, 07:51 PM   #41
Slim Backwater
Member
 
Registered: Nov 2005
Distribution: Slackware 10.2 2.6.20
Posts: 68

Rep: Reputation: 15
I think it's the quality of the hardware

I would like to add some data to this, as I'm trying to get better throughput myself.

GigE is usually faster than a single HDD so testing file transfers is pointless. I'm testing with iperf.

http://dast.nlanr.net/Projects/Iperf/

It compiles easily in Slackware, there's a package for apt-get in Ubuntu, and it's available for Windows (good to test that environment too).

In summary, I think it all comes down to the quality of the hardware, and I have some interesting numbers.

At work I have two Intel servers: one (DIME) is based on the Intel SE7500WV2 server board with dual 2 GHz Xeon processors; the other (Terra) is an Intel SE7520JR2 server board with dual 3.6 GHz Xeons. Both have dual onboard GigE NICs. Both are running Slackware 11.0 with kernel 2.6.17.13, recompiled to enable SMP, SMT and high memory. Not much else has been tuned in the kernel.

Using iperf I can get about 939-940 Mbits/sec through the tcp test (one server runs iperf -s and the other runs iperf -c, no tuning of parameters) and I think that this is awesome. They are interconnected by a 3Com 3250 switch (3CR17501-91).

That's my best case scenario. No tuning of any networking parameter, but running quality hardware.

At home I have three machines with GigE, with vastly inferior hardware and significantly lower throughput. These machines are interconnected by a 3Com SuperStack 3 Switch 3812 (3C17401).

P360 is a Dell Precision 360 (purchased used), Gateway is a Celeron 400 based on a Gigabyte GA-6BXC, and Overload is a Sempron 2800 on an ASUS K8N-E Deluxe. P360 and Overload have onboard GigE, and Gateway has had an Intel Pro/1000 GT Desktop adapter installed. All are regular PCI: 32-bit, 33 MHz.

P360 is running Ubuntu Edgy and Gateway and Overload are running Slackware 11.0. Overload is on 2.6.17.13 and Gateway is on 2.6.18 (because it has a Hauppauge PVR-150 in it)

Now the numbers:
iperf to/from Gateway (from either other machine) I can only get about 250 Mbits/sec

iperf to P360, from Overload, I can get about 820 Mbits/sec, however from P360 to Overload I only get 690 Mbits/sec.

What I have noticed, is that the CPU load on Gateway and Overload is 100% during the test, and the other machines (the two servers and P360), barely get to 20%. BTW, I get the same numbers with a cross-over, but I know that my home switch doesn't support Jumbo frames, so this is all with a plain old 1500 MTU.

What's interesting is that all that CPU is in Interrupts. Running `vmstat 1` shows thousands of interrupts per second. This, I can't yet explain.

Here's some sample vmstat data, two lines per machine (not the first and not consecutive), before and during an iperf run, while running the server (iperf -s). The iperf client doesn't generate nearly as many interrupts.

Code:
DIME: 
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0   2528 2119216  67596 894876    0    0     0     0  255  255  0  0 100  0
 0  0   2528 2118704  67596 894876    0    0     0     0 8288 16760  0 14 85  0

TERRA: (was busy, but in jumps up and id is not 0)
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 2  0  41100 217552  60516 3550432    0    0    51   583  329 11313 41  7 50  2
 4  0  41100 213844  60664 3551032    0    0    32   382 6808 19038 38 22 39  1

P360:
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 0  0      0 1362548  60496 430396    0    0     0     3  350  258  3  1 97  0
 1  0      0 1361516  60496 430396    0    0     0     6 8281 16180  3 24 73  0

OVERLOAD:
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 0  0   2644  14824 150488 666396    0    0     0     0  450   97  0  1 99  0
 2  0   2644  12344 150488 666396    0    0     0     0 32796  548  0 100  0  0

GATEWAY:
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 1  0   2568 255308  23164  31644    0    0     0     9  254  212  0  0 100  0
 2  0   2568 255052  23224  31644    0    0     0     9 4918  617  1 99  0  0
The interrupt load on OVERLOAD is incredible, and I think GATEWAY simply can't handle it. (but that little box continues to surprise me - I'm hoping that I can find a "fix").
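One thing that might help the interrupt-bound boxes: many NIC drivers support interrupt coalescing, so the card raises one interrupt per batch of packets instead of one per packet. A sketch only; whether these knobs exist depends entirely on the driver, so check first:

```shell
# Interrupt coalescing via ethtool (driver support varies; forcedeth-era
# drivers may not expose these knobs). Values are illustrative.
ethtool -c eth0                              # show current coalescing settings
ethtool -C eth0 rx-usecs 100 rx-frames 64    # batch RX interrupts
```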

I must admit this is the first thread I've read here on LinuxQuestions regarding GigE, and maybe this post is a bit premature (maybe the solution is in one of the earlier replies), but I wanted to post my data in a seemingly relevant thread. I'll reply if I find a relevant answer or solution.

._.

Last edited by Slim Backwater; 03-10-2007 at 05:34 AM.
 
Old 03-09-2007, 09:39 PM   #42
Bobymc
Member
 
Registered: Apr 2006
Location: INDONESIA
Distribution: SLAX,Damn S.L,Suse,Mandrake,Rd HAT62,72,73,90, Mandriva2k6, FEdora, SUNmicrosys.
Posts: 269

Rep: Reputation: 30
Lightbulb

I'm still listening.......

I think I need to say that you all should change your cable to Category 6 with a good AMP connector, just in case, to avoid semaphore.........don't you agree?
 
Old 03-11-2007, 08:56 AM   #43
Cliffster
LQ Newbie
 
Registered: Feb 2007
Posts: 26

Rep: Reputation: 15
Hmm, as mentioned earlier, the max rate is a hardware bit rate, not a data-stream rate; it necessarily includes all the packet wrappings and their attendant protocols' resends, CRCs, and whatnot.

Your layer interfaces are wrapping and stripping as data passes in and out, which is of course limited by hardware. If you send just one byte, it gets wrapped into a packet, which may itself be wrapped in one or several more headers, and that eats your bandwidth on the actual cable.

The double bus traversal of the data was also mentioned; then there's the disk write buffer and the speed of the RAID device. Unless you increase the MTU to epic proportions to decrease wrapper overhead, you won't get anywhere near a gigabit on the cable, and what device barring perhaps a ramdisk can write at that speed anyway? But good luck with it.
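Raising the MTU itself is a one-liner, assuming the NIC, the driver, and every switch in the path support jumbo frames (interface name illustrative):

```shell
# Enable 9000-byte jumbo frames on eth0. Every device in the path --
# both NICs and any switches -- must support the larger frame size.
ifconfig eth0 mtu 9000
```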

edit: if you want to throw money at it, get a couple of 2-nanosecond solid-state ramdisks with a fibre-optic cable between them; that'd really fly.

ps. the large number of CPU interrupts above is, for my money, the CPU polling the bus to discover whether it's done doing something yet, that being transporting data.

Last edited by Cliffster; 03-11-2007 at 09:06 AM.
 
Old 04-19-2007, 12:35 PM   #44
Bobymc
Member
 
Registered: Apr 2006
Location: INDONESIA
Distribution: SLAX,Damn S.L,Suse,Mandrake,Rd HAT62,72,73,90, Mandriva2k6, FEdora, SUNmicrosys.
Posts: 269

Rep: Reputation: 30
Cool

Maybe this is off-topic, but I think it's still on track.
I connected my 3Com 940 GigE peer-to-peer to my iSCSI initiator (PCI-X 64-bit, not PCI Express), then pulled a 2-gigabit file from a (software NAS) PC. The file transfer finished in less than a minute. Compare this with 100 Mbps (tell you what: I left the PC and didn't want to know).
My hardware is a P4/3 GHz and a Xeon/1 GHz.
I think full performance depends on how the files were written to the HDD, plus a good interconnect and CPU/media bottlenecks.
As I remember, that is between 300-400 Mbps real-world, not peak. (I'm not sure.)

Last edited by Bobymc; 04-19-2007 at 12:45 PM.
 
Old 06-02-2007, 07:43 AM   #45
chrisphillips
LQ Newbie
 
Registered: Jan 2007
Location: Sydney
Distribution: Debian & Ubuntu
Posts: 6

Rep: Reputation: 0
Quote:
Originally Posted by Bobymc
I'm still listening.......

I think I need to say that you all should change your cable to category 6 with a good AMP connector just in case to avoid semaphore.........dont you agree?
Rubbish. Cat5e can saturate a GigE connection. Using jumbo frames (9000 MTU) I can get 989 Mbps of user-data throughput with TCP. The remaining 11 Mbps is Ethernet/IP/TCP headers.
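That 989 Mbps figure is right in line with the framing arithmetic at a 9000-byte MTU (assuming the usual 38 bytes of Ethernet framing per frame plus 40 bytes of TCP/IP headers):

```shell
# Ceiling for TCP goodput on gigabit Ethernet with 9000-byte jumbo frames.
awk 'BEGIN {
  payload = 9000 - 40            # TCP payload bytes per frame
  wire    = 9000 + 38            # bytes occupying the wire per frame
  printf "max TCP goodput at MTU 9000: %.0f Mbit/s\n", 1000 * payload / wire
}'
```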


Chris
 
  

