LinuxQuestions.org
Linux - Networking This forum is for any issue related to networks or networking.
Routing, network cards, OSI, etc. Anything is fair game.

Old 06-17-2006, 11:55 AM   #16
SteveK1979
Member
 
Registered: Feb 2004
Location: UK
Distribution: RHEL, Ubuntu, Solaris 11, NetBSD, OpenBSD
Posts: 225

Rep: Reputation: 43

Hi BrianK,

It's interesting that you're saying you top out at around 300Mb/s on your gigabit network. I attended a Network Instruments training day last week, and was surprised to be told that a standard 1000Mb ethernet adapter's transfer rate will normally top out at around 250Mb/s. That would seem to tie up with what you're seeing.

I'm not really up with the specs of PCI-X at all, but from what I was told, a 66MHz 64-bit card is required to achieve 1000Mb/s wire speeds. However, it's worth noting that we were looking at this from the perspective of network traffic capture.

Cheers,
Steve
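For reference, the bus arithmetic behind that training-day advice can be sketched with a bit of awk (rough decimal numbers, ignoring bus protocol overhead, which in practice costs a further chunk):

```shell
# Theoretical bus bandwidth vs. what gigabit Ethernet needs (decimal MB/s):
pci=$(awk 'BEGIN  { printf "%.0f", 32 * 33000000 / 8 / 1000000 }')   # plain PCI, 32-bit @ 33MHz
pcix=$(awk 'BEGIN { printf "%.0f", 64 * 66000000 / 8 / 1000000 }')   # PCI-X, 64-bit @ 66MHz
gige=$(awk 'BEGIN { printf "%.0f", 1000000000 / 8 / 1000000 }')      # GigE wire speed
echo "PCI: ${pci}MB/s  PCI-X: ${pcix}MB/s  GigE needs: ${gige}MB/s"
```

So a plain PCI slot (~132MB/s, shared by every device on the bus) barely covers a single GigE port at wire speed (~125MB/s), which fits with the claim that a 64-bit/66MHz card is needed to sustain 1000Mb/s.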
 
Old 06-17-2006, 12:03 PM   #17
scheidel21
Senior Member
 
Registered: Feb 2003
Location: CT
Distribution: Debian 6+, CentOS 5+
Posts: 1,323

Rep: Reputation: 100
The seven-layer OSI model is the ideal; in reality we deal with the TCP/IP model, which is five layers that roughly map onto the OSI model. Your problems are a combination of all of the aforementioned: you have overhead, and the more you do, the more overhead you have. For example, if you ran your network with IPsec enabled, as is now common in Windows wired networks with AD, you eat up more bandwidth than normal TCP/IP communication alone uses. Additionally, people are right that if you use a TCP protocol, the error correction eats up more than UDP, but at the same time it helps prevent garbled files.

Then there's the hardware factor: it's likely the buses the data has to traverse in each computer, not the cards, that cause a slowdown. The HD can be a bottleneck too, though it seems unlikely in your case; remember that if other things are accessing the HD at the same time, that eats into your HD performance as well.

Let us know if those changes help, though. I'd be interested to know, as I graduated with a networking degree.

Alex
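To put a rough number on the per-packet overhead described above, here is a back-of-the-envelope calculation (assumed framing: 38 bytes of Ethernet overhead including preamble, inter-frame gap and FCS, plus 20-byte IP and 20-byte TCP headers, no TCP options):

```shell
awk 'BEGIN {
  mtu = 1500; eth = 38; ip = 20; tcp = 20
  payload = mtu - ip - tcp      # 1460 bytes of application data per frame
  wire    = mtu + eth           # 1538 bytes on the wire per frame
  printf "efficiency %.1f%%, best-case goodput %.0f Mbit/s on GigE\n",
         100 * payload / wire, 1000 * payload / wire
}'
```

In other words, headers alone only cost about 5%, so they can't explain a 300Mbit/s ceiling by themselves; per-packet CPU and bus costs matter more.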
 
Old 06-19-2006, 01:07 PM   #18
Bobymc
Member
 
Registered: Apr 2006
Location: INDONESIA
Distribution: SLAX,Damn S.L,Suse,Mandrake,Rd HAT62,72,73,90, Mandriva2k6, FEdora, SUNmicrosys.
Posts: 269

Rep: Reputation: 30
Hi,
I am not really good at this,

How about plugging 2 NIC into a single LINUX box?

I also read in a magazine that there's a new tech called the "vibrating HDD", which is supposedly much faster than a conventional spinning HDD.

Regards
 
Old 06-19-2006, 03:54 PM   #19
scheidel21
Senior Member
 
Registered: Feb 2003
Location: CT
Distribution: Debian 6+, CentOS 5+
Posts: 1,323

Rep: Reputation: 100
2 NICs in a box is used to increase bandwidth; in a fast Ethernet network it also provides redundancy, so that if a NIC fails there is still connectivity. In this gentleman's situation, he is trying to maximize bandwidth on gigabit Ethernet and is topping out at 300Mbit/s, which of course is still faster than the theoretical 200Mbit/s of full-duplex fast Ethernet (100Mbit/s, aka 100BaseT).

Alex
 
Old 06-19-2006, 08:47 PM   #20
Bobymc
Member
 
Registered: Apr 2006
Location: INDONESIA
Distribution: SLAX,Damn S.L,Suse,Mandrake,Rd HAT62,72,73,90, Mandriva2k6, FEdora, SUNmicrosys.
Posts: 269

Rep: Reputation: 30
So in spite of all the bottlenecks, if he achieves 300Mbit/s per NIC he'll get 0.6Gbit/s with two NICs in parallel.

About the HDD: I think a SCSI drive is more suitable for this environment.

Boby Lee
 
Old 06-19-2006, 09:33 PM   #21
BrianK
Senior Member
 
Registered: Mar 2002
Location: Los Angeles, CA
Distribution: Debian, Ubuntu
Posts: 1,334

Original Poster
Rep: Reputation: 51
Quote:
Originally Posted by Bobymc
So in spite of all the bottlenecks, if he achieves 300Mbit/s per NIC he'll get 0.6Gbit/s with two NICs in parallel.

About the HDD: I think a SCSI drive is more suitable for this environment.

Boby Lee
I mentioned earlier in this thread that I'm using a 4 port NIC with an 802.3ad trunk (that's the protocol for max throughput) - so I'm already using 4 ports, i.e. 4 interfaces, 4 plugs, 4 cables, 4 ports on the switch (that are also 802.3ad'd) etc. etc. etc.
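(For anyone following along, a Linux 802.3ad bond of the kind described is typically set up along these lines. This is a sketch assuming the stock Linux bonding driver; the interface names and address are placeholders, and distro init scripts usually wrap these steps.)

```shell
# Load the bonding driver in 802.3ad (LACP) mode; miimon polls link state every 100ms
modprobe bonding mode=802.3ad miimon=100
# Bring up the bond with an address, then enslave the four physical ports
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1 eth2 eth3
# The corresponding switch ports must be configured as an 802.3ad/LACP trunk too
```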

Multiple NICs don't help unless you have them on both sides of the transfer. Unfortunately, I don't have multi-port NICs on my other computers. Regardless, that would be a little pointless - 50 computers using hundreds of switch ports - assuming the switches could support that many trunks (mine only supports 4 trunks of n ports).

SCSI drives may be better suited for this, yes... that said, the load average on the server, even under heavy drive access, stays pretty low - I've never seen it go above 1 on this new server, while my last server was running in the 20s under heavy load... and it was no slouch - a 2.8GHz P4 with 1GB RAM. That said, I get disk writes at around 70MB/sec on the new server (tested with dd, not hdparm), so we're not running into disk access issues yet. (I mentioned earlier in the thread that I'm using a fast 3ware RAID controller that has actual hardware "acceleration", unlike the low-cost Promise cards that are so commonly found.)
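(A dd-style write test like the one mentioned can be reproduced roughly as below; the path and sizes are arbitrary examples, and conv=fdatasync makes dd flush to disk so the page cache doesn't inflate the figure.)

```shell
# Write 256MB of zeros and let dd report the sustained write rate
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync
rm -f /tmp/ddtest
```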

It looks like we're running into a packet-size problem... the standard MTU of 1500 is not large enough to reach 1000Mb/sec. I plan on testing with an MTU of 9000 (jumbo frames), but the server is currently serving & will be for at least the next 20 hours, so I'm stuck until then.
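(The jumbo-frame change being planned is just the following; eth0 is a placeholder name, and the NICs, drivers, and switch must all support the larger frame on both ends.)

```shell
ifconfig eth0 mtu 9000        # raise the MTU to jumbo frames
ifconfig eth0 | grep -i mtu   # confirm the new MTU took
```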

I'll be sure to post results when I'm done.

Last edited by BrianK; 06-19-2006 at 09:39 PM.
 
Old 06-19-2006, 11:31 PM   #22
Crito
Senior Member
 
Registered: Nov 2003
Location: Knoxville, TN
Distribution: Kubuntu 9.04
Posts: 1,168

Rep: Reputation: 53
Just out of curiosity, it might be interesting to run a Cat6 crossover cable between the two PCs you're transferring to/from and re-run the same test, just to eliminate one possibility. Any difference in throughput should be due to switching latency.

EDIT: LOL, was reading first page of thread when I replied... hate it when that happens... time for bed I suppose...

Last edited by Crito; 06-19-2006 at 11:42 PM.
 
Old 06-20-2006, 07:39 AM   #23
Bobymc
Member
 
Registered: Apr 2006
Location: INDONESIA
Distribution: SLAX,Damn S.L,Suse,Mandrake,Rd HAT62,72,73,90, Mandriva2k6, FEdora, SUNmicrosys.
Posts: 269

Rep: Reputation: 30
About the Cat6 cable: it has a shield on each pair, unlike Cat5 (no shield at all). I imagine this cable would eliminate cross-talk.

Just outside of this pandemonium: last time I found that my P4/3GHz runs faster (during boot, or maybe when up and running) with only 512MB (though I have 1GB installed).

Dual-core CPUs (AMD or Intel) are quite cheap nowadays, as are 1600MHz FSB mobos (Taiwan clones); these would run dandy with your spec. [Just in case, for a future upgrade.]

Last edited by Bobymc; 06-21-2006 at 02:11 PM.
 
Old 06-21-2006, 05:26 AM   #24
intel_ro
Member
 
Registered: Jun 2006
Location: Romania
Distribution: RH 9, FD 2,3,4,5 Debian
Posts: 37

Rep: Reputation: 15
Yes, I get performance over 300Mbit:

FTP --+                                              +-- Clients
FTP --+                                              +-- Clients
FTP --+-- Switch <-1Gbit-> Router <-1Gbit-> Switch --+-- Clients
FTP --+                                              +-- Clients
FTP --+                                              +-- Clients

The FTP servers have 100Mbit interfaces and so do the clients, but when all the clients are copying from the FTP servers I get over 450Mbit. The limitation was that the router was a Linux machine with a 900MHz CPU, and I used plain PCI, not even 64-bit PCI.

The theoretical throughput of a plain PCI slot is 32 bits @ 33MHz:
32 * 33,000,000 / 8 = 132,000,000 bytes/s, i.e. about 132MB/s (shared by everything on the bus).

Last edited by intel_ro; 06-21-2006 at 01:08 PM.
 
Old 06-21-2006, 02:00 PM   #25
Bobymc
Member
 
Registered: Apr 2006
Location: INDONESIA
Distribution: SLAX,Damn S.L,Suse,Mandrake,Rd HAT62,72,73,90, Mandriva2k6, FEdora, SUNmicrosys.
Posts: 269

Rep: Reputation: 30
Lightbulb

I imagine what BrianK wants is a 0.3Gbit/s peak [based on Alex's post].

(Look before you buy: check the cache memory on the processor and the drives, and get the biggest size.)

Greetz

Last edited by Bobymc; 06-21-2006 at 02:29 PM.
 
Old 06-26-2006, 01:42 AM   #26
lazlow
Senior Member
 
Registered: Jan 2006
Posts: 4,363

Rep: Reputation: 172
BrianK

This is a little old, but it will give you someplace to start with jumbo frames: www.cs.uni.edu/~gray/gig-over-copper/gig-over-copper.html

Scheidel21

Concerning post #19, are you saying that you cannot bond GigE in mode 6 for maximum bidirectional throughput? I had assumed there would be no problem with this (excluding HD/bus speeds).

Lazlow
 
Old 06-26-2006, 07:24 AM   #27
scheidel21
Senior Member
 
Registered: Feb 2003
Location: CT
Distribution: Debian 6+, CentOS 5+
Posts: 1,323

Rep: Reputation: 100
Not trying to say that at all, Lazlow. Let's say he has his 4-port gigabit Ethernet card, each port capable of handling gigabit throughput, and the card is connected with all four ports to a 10/100/1000 switch. His other computers connect to the switch at 100BaseT, so theoretically each gig port on his server could handle requests from 10 machines (a max of 40 machines at once, though because of overhead it will be fewer), because the clients are only pushing requests and talking at 100Mbit/s. Additionally, the four ports provide redundancy: unless the whole card fails, if only one port goes then the other three still service requests. Two NICs can accomplish the same thing, providing increased bandwidth (you have two net connections) and providing redundancy should one fail.

BrianK:

This is theoretical, but as you have a four-port card: each one is a 1000Mbit/s port and the bandwidth isn't divided up, right? (I'm not as familiar with gigabit Ethernet as with fast Ethernet; I graduated recently but haven't had any hands-on experience with it.) I'm just speculating that if the card is splitting the bandwidth between the ports, that could help explain the max throughput, because a client only talks to the server on one port, not all four simultaneously.
 
Old 06-26-2006, 05:24 PM   #28
BrianK
Senior Member
 
Registered: Mar 2002
Location: Los Angeles, CA
Distribution: Debian, Ubuntu
Posts: 1,334

Original Poster
Rep: Reputation: 51
hmm...

finally got a minute of free time on the server.

upping the MTU to 9000 on both sides & enabling jumbo frames on the switch did nothing to increase transfer speeds. I may have more time to play with it later (increasing the TCP buffer size & so forth), but for now it's time for other work.

scheidel21 - I don't follow what you're saying. I can tell you this: whether you have one port or 100 ports, transfer speeds from one machine to another will only go as fast as the slowest machine. In my case, the slowest machine is my workstation with one 1000Mb port... which, without any extra settings, will only do about 250-300Mb/sec. Now, when 50 machines pull from the server, the server likely sees throughput of around 300Mb/sec * 4 as it stands now, but I would like it to see closer to 1000Mb/sec * 4. To achieve that, it appears I need a high MTU and larger tx & rx buffers on both sides.
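(The buffer tuning referred to is usually done with sysctl, roughly as below. The values are illustrative examples, not recommendations, and the settings need to be applied on both sender and receiver.)

```shell
# Raise the ceiling on socket buffer sizes...
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
# ...and let TCP autotune up to it (min / default / max, in bytes)
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
```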
 
Old 06-26-2006, 07:19 PM   #29
intel_ro
Member
 
Registered: Jun 2006
Location: Romania
Distribution: RH 9, FD 2,3,4,5 Debian
Posts: 37

Rep: Reputation: 15
Don't tell me gigabit Ethernet isn't working well - then what would you say about copper 10Gbit connections? If one machine can't handle it, ten machines can... or use more than 2 HDDs in RAID!
 
Old 06-29-2006, 01:33 AM   #30
Bobymc
Member
 
Registered: Apr 2006
Location: INDONESIA
Distribution: SLAX,Damn S.L,Suse,Mandrake,Rd HAT62,72,73,90, Mandriva2k6, FEdora, SUNmicrosys.
Posts: 269

Rep: Reputation: 30
With a 1000Mb/sec * 4 peak, don't you think you need a dual-core processor, a 1600MHz front-side-bus mobo, and 15000rpm SCSI media?

Last edited by Bobymc; 06-29-2006 at 01:36 AM.
 
  

