Old 04-01-2010, 08:43 AM   #1
ikmalf
LQ Newbie
 
Registered: Feb 2010
Posts: 15

Rep: Reputation: 0
Transfer Rate For Gigabit Ethernet in Linux


Hi,

We have a Linux box which acts as a file server. Currently, files and directories are exported using NFS.

At the moment, we are a bit concerned about its data transfer performance.

FYI, we are using an embedded Gigabit Ethernet port on the file server.
We ran a few simple write tests between an NFS client (which also uses a GigE port) and the NFS server. In these tests, the NFS server and client are connected directly to each other with a Cat5E cable. Unfortunately, the write/transfer speed results are not as per our expectations. It scores roughly 11-12MByte/s, whereas theoretically Gigabit Ethernet should be able to reach up to approximately 120MByte/s.


I wouldn't expect to reach the theoretical max transfer rate (it would be great if we could), but I would appreciate it if you guys could share your thoughts on the following:


1) What's the practical max data transfer rate you guys have managed to observe on a normal Gigabit-based connection? What about with jumbo frames configured?

2) Is there any additional tuning/configuration we need to do within the OS to reach those practical max data transfer rate figures?

3) Does the PCI-e / system bus play a role in achieving this speed? For example, we are using the embedded GigE port, and we have heard some people say that embedded ports actually share the system bus and resources with other devices, which might add to performance issues. Correct me if I'm wrong.

4) Will converting to Cat6 cabling guarantee an increase in data transfer performance?

5) In the future (once we are clear on how far a single GigE link can take us), we are looking into doing bonding, since the NFS server's shared directory/volume read-write speed is much higher (i.e. 400-600MByte/s). Will bonding allow us to achieve higher NFS read/write speeds? Which bonding modes are best suited for this purpose? I would appreciate it if anybody who has experience doing bonding for NFS could share it.


Thanks!
 
Old 04-01-2010, 05:41 PM   #2
Electro
LQ Guru
 
Registered: Jan 2002
Posts: 6,042

Rep: Reputation: Disabled
Quote:
Originally Posted by ikmalf View Post
1) What's the practical max data transfer rate you guys have managed to observe on a normal Gigabit-based connection? What about with jumbo frames configured?
Setting jumbo frames might help, but I think it is more of a workaround than a real fix for your problem.
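If you do want to try jumbo frames anyway, it is just an MTU change on both ends, and since your two boxes are directly connected there is no switch that also has to support it. A minimal sketch (eth0 is only an example interface name):

Code:
ifconfig eth0 mtu 9000   # eth0 is just an example; both ends must use the same MTU
Verify with ifconfig afterwards; if the other end is still at 1500 you will see drops instead of a speedup.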

Quote:
Originally Posted by ikmalf View Post
2) Is there any additional tuning/configuration we need to do within the OS to reach those practical max data transfer rate figures?
Use XFS or JFS as your file system instead of EXT3 or EXT4. If you are using only one hard drive, that will not be adequate to hit the 120 MB per second goal. A mechanical hard drive has a maximum and a minimum bandwidth, and you should plan around the minimum throughput because that is what you get under the worst conditions. If a hard drive's minimum bandwidth is 25 megabytes per second, you will need at least four hard drives to reach 100 megabytes per second, and at least five to be comfortable at 120 megabytes per second. Since there is overhead, you need more hard drives to compensate for it; about eight would do. That is just with the drives in RAID-0. Setting up RAID-5 or RAID-6 will cost roughly twice as much. After you have done all of this, then yes, you can optimize the kernel and NFS to make the software work as efficiently as possible.
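To get a rough idea of whether the disks rather than the network are the limit, you can run a quick local write test on the server first. A sketch, assuming /export/testfile is a file on the exported volume (adjust the path to your setup):

Code:
dd if=/dev/zero of=/export/testfile bs=1M count=1024 oflag=direct   # example path
rm /export/testfile
If that local figure is already well above what you see over NFS, the disks are not what is holding you back.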

Quote:
Originally Posted by ikmalf View Post
3) Does the PCI-e / system bus play a role in achieving this speed? For example, we are using the embedded GigE port, and we have heard some people say that embedded ports actually share the system bus and resources with other devices, which might add to performance issues. Correct me if I'm wrong.
In your case this is not the problem. Any computer with too many add-ons installed will always have its performance penalized, though.

Quote:
Originally Posted by ikmalf View Post
4) Will converting to Cat6 cabling guarantee an increase in data transfer performance?
Sure, a better cable could be used, but since CAT-5e is rated for 1 Gb speeds, you do not need to worry about this yet. At this point it is like using the most expensive audio cable to get the best sound quality when it is the other components in the system that actually produce the sound. In your case it is more about the other components of your setup.

Quote:
Originally Posted by ikmalf View Post
5) In the future (once we are clear on how far a single GigE link can take us), we are looking into doing bonding, since the NFS server's shared directory/volume read-write speed is much higher (i.e. 400-600MByte/s). Will bonding allow us to achieve higher NFS read/write speeds? Which bonding modes are best suited for this purpose? I would appreciate it if anybody who has experience doing bonding for NFS could share it.
I have not used network bonding. It should provide more throughput, but at the cost of more latency. Bonding does require the same number of NICs on each system. It is best to use one NIC to simplify the setup and troubleshooting, and it is best to use multiple servers to spread out the load rather than just one big one.

For any network, limit the number of connections. If you want to let 25 users access the server, figure that each user will get around 5 megabytes per second from a 1 gigabit NIC (roughly 125 MB/s divided by 25). Also, do not allow a mixture of 100 megabit NICs on your network, because everything on the network can slow down to that speed just to stay compatible. Sure, you can fix the network at 1 Gb speeds, but it will cost more time to set up each computer on the network.
 
Old 04-01-2010, 06:05 PM   #3
smbell100
Member
 
Registered: Sep 2007
Location: Shetland, UK
Distribution: Slackware, Mandrake, LFS
Posts: 59

Rep: Reputation: 16
Hi

That is the sort of speed I am getting on my 100Mb/s network. Are you actually connecting at 1Gb/s, or is the network only working at 100Mb/s? I get about 28MB/s on USB2.
 
Old 04-01-2010, 07:36 PM   #4
jefro
Moderator
 
Registered: Mar 2008
Posts: 22,001

Rep: Reputation: 3629
Can you offload checksums on both ends, switch to half duplex, and try it again?
 
Old 04-01-2010, 11:53 PM   #5
ikmalf
LQ Newbie
 
Registered: Feb 2010
Posts: 15

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by smbell100 View Post
Hi

That is the sort of speed I am getting on my 100Mb/s network. Are you actually connecting at 1Gb/s, or is the network only working at 100Mb/s? I get about 28MB/s on USB2.
I'm pretty sure it is 'physically' a GigE network, since:

1) both client and server connect directly to each other during the tests, without any intermediary (e.g. a switch) in between.

2) the ports on both ends are GigE ports.

The Cat 5E cable was our initial suspect, since we managed to get a 70-80MByte/s data transfer rate on another similar setup which uses Cat 6 cables.

But since Electro mentions Cat 5E is rated for 1Gbps, I guess we have to look further into the OS/filesystem configs.
 
Old 04-02-2010, 12:04 AM   #6
ikmalf
LQ Newbie
 
Registered: Feb 2010
Posts: 15

Original Poster
Rep: Reputation: 0
Jefro,

Quote:
Originally Posted by jefro View Post
Can you offload checksums on both ends, switch to half duplex, and try it again?
Apologies for my ignorance; how do we do that?
Based on a few Google searches, will this do, or am I missing other necessary steps for half duplex?

e.g: if eth0 is our network interface

# ethtool -K eth0 tx off

Source : http://www.meshwalk.com/?p=34
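And for the half duplex part, I am guessing from the ethtool man page it would be something like the following (eth0 again just as an example; not sure whether autoneg also needs to be turned off for it to stick)?

Code:
ethtool -s eth0 duplex half   # eth0 is just an example interface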
 
Old 04-02-2010, 12:55 AM   #7
mweed
Member
 
Registered: Mar 2006
Posts: 33

Rep: Reputation: 17
Quote:
1) What's the practical max data transfer rate you guys have managed to observe on a normal Gigabit-based connection? What about with jumbo frames configured?
There are many factors that come into play here. Usually the network will not be the bottleneck. If you are doing large sequential I/O transfers, then you can max out the gigabit link. But a normal multi-client NFS server sees mostly random I/O and is disk bound, even on super fast 15K SAS RAID10 arrays. It depends on your usage pattern.

Quote:
2) Is there any additional tuning/configuration we need to do within the OS to reach those practical max data transfer rate figures?
Adjust the TCP window and queue sizes.
For NFS, make sure rsize and wsize are 32K. Try UDP vs TCP. Try version 4 instead of version 3. Mount clients read-only unless they really need to write.
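For example, something along these lines; the buffer sizes are only a generic starting point, not values tuned for your hardware:

Code:
sysctl -w net.core.rmem_max=8388608              # example buffer sizes
sysctl -w net.core.wmem_max=8388608
sysctl -w net.ipv4.tcp_rmem="4096 87380 8388608"
sysctl -w net.ipv4.tcp_wmem="4096 65536 8388608"
And on the client an explicit mount might look like this (server:/export and /mnt are placeholders):

Code:
mount -t nfs -o rsize=32768,wsize=32768,tcp server:/export /mnt
If the sysctl values help, put them in /etc/sysctl.conf so they survive a reboot.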

Quote:
3) Does the PCI-e / system bus play a role in achieving this speed? For example, we are using the embedded GigE port, and we have heard some people say that embedded ports actually share the system bus and resources with other devices, which might add to performance issues. Correct me if I'm wrong.
It is possible. You can try the following to remove the NFS part of the equation.

On one side of the link run:
Code:
nc -l 1234 > /dev/null
On the other side run:
Code:
dd if=/dev/zero bs=1460 count=$((1024 * 1024)) | nc -v $ip_of_listening_host 1234
That will copy ~1.5G of data at close to the maximum rate the network can transfer.

Quote:
4) Will converting to Cat6 cabling guarantee an increase in data transfer performance?
No. Only if you are having rx or tx errors.

Check the output of netstat -s and ifconfig for errors, retransmits, etc. Are you currently seeing any packet loss?
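For example (eth0 as a placeholder for your interface):

Code:
netstat -s | grep -i retrans
ifconfig eth0 | grep -iE "errors|dropped|overruns"   # eth0 = your interface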

Quote:
5) In the future (once we are clear on how far a single GigE link can take us), we are looking into doing bonding, since the NFS server's shared directory/volume read-write speed is much higher (i.e. 400-600MByte/s). Will bonding allow us to achieve higher NFS read/write speeds? Which bonding modes are best suited for this purpose? I would appreciate it if anybody who has experience doing bonding for NFS could share it.
If the usage pattern is limited by the network, then yes, bonding would help. As for which mode, I'd say experiment with the balance modes.
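As a rough sketch only, a round-robin (balance-rr) bond can be brought up by hand like this; the interface names and address are just examples, and your distro's network scripts are the proper place to make it persistent:

Code:
modprobe bonding mode=balance-rr miimon=100
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up   # example address
ifenslave bond0 eth0 eth1                              # example slave interfaces
Keep in mind that 802.3ad (LACP) mode also needs matching configuration on the switch, and that depending on the mode and hashing a single NFS client/TCP stream may still be limited to the speed of one link.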


Make sure both interfaces are running at 1000/full. Use ethtool.
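Something like this shows it quickly (eth0 as an example):

Code:
ethtool eth0 | grep -E "Speed|Duplex"   # eth0 = your interface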

Maybe look into NFS alternatives. Gluster is good for parallel scaling to high transfer rates, but does not do well with small files and concurrent file access. Maybe run iSCSI with OCFS.
 
Old 04-02-2010, 03:12 PM   #8
jefro
Moderator
 
Registered: Mar 2008
Posts: 22,001

Rep: Reputation: 3629
# apt-get install ethtool net-tools (or use your distro's software management). http://www.cyberciti.biz/faq/linux-c...ethernet-card/

"What's the practical max data"
It depends on the slowest component of the transfer. If the NIC is on PCI-e, then the hard drive transfer is most likely the limit; plain PCI or PCI-X can fall well below gigabit speeds. An embedded back-plane NIC is the same as an add-on card for the most part; if you know how it is attached to the back-plane, that would help too.
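lspci can usually show how the onboard port is attached. Something like this (the 02:00.0 address is only a placeholder, use whatever the first command lists for your nic):

Code:
lspci | grep -i ethernet
lspci -vv -s 02:00.0   # placeholder bus address
On a pci-e device the verbose output includes LnkCap/LnkSta lines with the link width and speed, which tells you how much bus bandwidth the port really has.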


I'd test your LAN cable with a good tester too. That may be the issue. I have purchased some from stores that were bad.

In a perfect computer-to-computer situation, the max may only be about 70% of gigabit speed.
 
Old 04-07-2010, 04:53 AM   #9
ikmalf
LQ Newbie
 
Registered: Feb 2010
Posts: 15

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by mweed View Post

On one side of the link run:
Code:
nc -l 1234 > /dev/null
On the other side run:
Code:
dd if=/dev/zero bs=1460 count=$((1024 * 1024)) | nc -v $ip_of_listening_host 1234
That will copy ~1.5G of data at close to the maximum rate the network can transfer.



Make sure both interfaces are running at 1000/full. Use ethtool.
Just some updates on the initial issue :

1) Running tests using netcat/nc and ethtool did help! ethtool shows that the server's interface runs at only 100Mbit/s even though it supports 1000Mb/s. That alone cuts the file transfer rate down to 10-11MByte/s. Sigh.

2) Interestingly, this 100Mbit/s issue only happens on one of the network interfaces (e.g. eth3) on the server. The other interfaces on the same server run fine at 1000Mbit/s.


I tried using ethtool (on both the client and server end) with "ethtool -s <iface> speed 1000", but it is still the same :-(


Any ideas?


ethtool output (server) :
# ethtool -s eth3 speed 1000
# ethtool eth3
Settings for eth3:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 100Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: g
Wake-on: d
Link detected: yes




ethtool output (client) :

# ethtool -s eth1 speed 1000
# ethtool eth1

Settings for eth1:
Supported ports: [ MII ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Advertised auto-negotiation: No
Speed: 100Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: g
Wake-on: d
Current message level: 0x000000ff (255)
Link detected: no
 
Old 04-07-2010, 05:56 AM   #10
nonamenobody
Member
 
Registered: Oct 2002
Posts: 138

Rep: Reputation: 22
Quote:
Originally Posted by ikmalf View Post
In these tests, the NFS server and client are connected directly to each other with a Cat5E cable.
Is the cable a cross-over cable or are you relying on auto MDI/X? If you are using a crossover cable, make sure that all 4 pairs are crossed as Gigabit ethernet requires the use of all 4 pairs. If you are using auto MDI/X, try using a suitable crossover cable.

Last edited by nonamenobody; 04-07-2010 at 06:14 AM.
 
Old 04-07-2010, 04:26 PM   #11
jefro
Moderator
 
Registered: Mar 2008
Posts: 22,001

Rep: Reputation: 3629
I have had some bad cables. It's not a bad idea to swap in a known good one.

Recheck the switch or whatever it is connected to. Using a known good port would be helpful.

Look at the drivers being used also.

Turn on checksumming, or at least compare with it on and off. Watch the CPU load during the tests; if it is CPU intensive on one side only, checksumming may need to be offloaded to the NIC.
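You can see what is currently offloaded with (eth3 as an example):

Code:
ethtool -k eth3   # lowercase -k shows settings, -K changes them
and toggle individual features with ethtool -K as posted earlier in the thread.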

Can we assume the NICs are all a similar model and version level?
 
Old 04-08-2010, 09:48 PM   #12
ikmalf
LQ Newbie
 
Registered: Feb 2010
Posts: 15

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by nonamenobody View Post
Is the cable a cross-over cable or are you relying on auto MDI/X? If you are using a crossover cable, make sure that all 4 pairs are crossed as Gigabit ethernet requires the use of all 4 pairs. If you are using auto MDI/X, try using a suitable crossover cable.
We are using Auto MDI/X.

Quote:
Originally Posted by jefro View Post
I have had some bad cables. It's not a bad idea to swap in a known good one.

Recheck the switch or whatever it is connected to. Using a known good port would be helpful.

Look at the drivers being used also.

Turn on checksumming, or at least compare with it on and off. Watch the CPU load during the tests; if it is CPU intensive on one side only, checksumming may need to be offloaded to the NIC.

Can we assume the NICs are all a similar model and version level?
OK, we will try swapping the cables (and ports) and see what the results are. Yes, all the NICs are the same. Based on 'lspci', I believe it's the built-in quad-port Broadcom that comes with the Dell R610.


Any other ideas/comments are appreciated :-)
Will update you guys on the outcome once we manage to get this swapping/testing done.
 
Old 04-08-2010, 10:59 PM   #13
mweed
Member
 
Registered: Mar 2006
Posts: 33

Rep: Reputation: 17
Just a guess, but it won't hurt to try while waiting to get the cables swapped. Run:
ethtool -s eth1 speed 1000 duplex full autoneg on
 
  



Tags: bonding, gigabit, network, nfs, performance


