LinuxQuestions.org


ikmalf 04-01-2010 08:43 AM

Transfer Rate For Gigabit Ethernet in Linux
 
Hi,

We have a Linux box which acts as a file server. Currently, files and directories are exported using NFS.

At the moment, we are a bit concerned about its data transfer performance.

FYI, we are using an embedded Gigabit Ethernet port on the file server.
We ran a few simple write tests between an NFS client (which also uses a GigE port) and the NFS server. In these tests, the NFS server and client were connected directly to each other with a Cat5e cable. Unfortunately, the write/transfer speed results are not what we expected: roughly 11-12MByte/s, whereas Gigabit Ethernet can theoretically reach approximately 120MByte/s.
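(By "simple write tests" we mean something roughly like the following; the mount point and file name below are just placeholders:)

Code:

# on the client, with the export mounted at /mnt/nfs
dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=1024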


I wouldn't expect to reach the theoretical max transfer rate (it would be great if we could :), but I would appreciate it if you guys could share your thoughts on the following:


1) What's the practical max data transfer rate you guys have managed to observe on a normal Gigabit connection? What about with a jumbo frames configuration?

2) Is there any additional tuning/configuration we need to do within the OS to reach that practical max data transfer rate figure?

3) Does the PCI-e / system bus play a role in achieving this speed? For example, we are using the embedded GigE port, and we have heard that embedded ports actually share the system bus and resources with other devices, which might add to performance issues. Correct me if I'm wrong.

4) Will converting to Cat6 cabling guarantee an increase in data transfer performance?

5) In the future (once we are clear on how much transfer rate a single GigE link can reach), we are looking into bonding, since the NFS server's shared directory/volume read/write speed is much higher (i.e. 400-600MByte/s). Will bonding allow us to achieve higher NFS read/write speeds? Which bonding modes are best suited for this purpose? I would appreciate it if anybody who has experience with bonding for NFS could share it.


Thanks!

Electro 04-01-2010 05:41 PM

Quote:

Originally Posted by ikmalf (Post 3920403)
1) What's the practical max data transfer rate you guys have managed to observe on a normal Gigabit connection? What about with a jumbo frames configuration?

Setting jumbo frames might help, but I think it is more of a workaround than a real fix for your problem.
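If you do try it, jumbo frames are just a larger MTU on both ends (eth0 below is a placeholder, and the NICs, drivers and any switch in the path all need to support it):

Code:

ip link set dev eth0 mtu 9000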

Quote:

Originally Posted by ikmalf (Post 3920403)
2) Is there any additional tuning/configuration we need to do within the OS to reach that practical max data transfer rate figure?

Use XFS or JFS as your file system instead of EXT3 or EXT4. If you are using only one hard drive, that will not be adequate to hit the 120 MB per second goal. A mechanical hard drive has a maximum and a minimum bandwidth, and you should plan around the minimum throughput, because that is what you get under worst-case conditions. If a hard drive has a minimum bandwidth of 25 megabytes per second, you will need at least four hard drives to reach 100 megabytes per second, and at least five to be comfortable at 120 megabytes per second. Since there is overhead, you need more hard drives to compensate for it; about eight are needed. That is just with the drives in RAID-0; setting up RAID-5 or RAID-6 will roughly double the cost. After you have done all of this, then yes, you can optimize the kernel and NFS to make the software work as efficiently as possible.
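As a quick sanity check before tuning anything, measure the raw disk throughput on the server (a sketch; /dev/sda and the target path are placeholders, and the dd command creates a 1 GB file):

Code:

# raw sequential read speed of the disk
hdparm -t /dev/sda
# uncached 1 GB sequential write to the exported filesystem
dd if=/dev/zero of=/path/on/export/testfile bs=1M count=1024 oflag=direct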

Quote:

Originally Posted by ikmalf (Post 3920403)
3) Does the PCI-e / system bus play a role in achieving this speed? For example, we are using the embedded GigE port, and we have heard that embedded ports actually share the system bus and resources with other devices, which might add to performance issues. Correct me if I'm wrong.

In your case this is not the problem. Any computer with too many add-on cards installed will have its performance penalized.

Quote:

Originally Posted by ikmalf (Post 3920403)
4) Will converting to Cat6 cabling guarantee an increase in data transfer performance?

Sure, a better cable could be used, but since Cat5e is rated for 1 Gb speeds, you do not need to worry about this yet. At this point it is like using the most expensive audio cable to get the best sound quality when it is the other components in the system that produce the sound. In your case, it is the other components of your setup that matter.

Quote:

Originally Posted by ikmalf (Post 3920403)
5) In the future (once we are clear on how much transfer rate a single GigE link can reach), we are looking into bonding, since the NFS server's shared directory/volume read/write speed is much higher (i.e. 400-600MByte/s). Will bonding allow us to achieve higher NFS read/write speeds? Which bonding modes are best suited for this purpose? I would appreciate it if anybody who has experience with bonding for NFS could share it.

I have not used network bonding. It should provide more throughput, but at the cost of more latency. Bonding does require the same number of NICs on each system. Using one NIC keeps the setup and any troubleshooting simpler, and it is better to use multiple servers to spread out the load than one big one.

For any network, limit the number of connections. If you want to let 25 users access the server, plan on each user getting around 5 megabytes per second from a 1 gigabit NIC (roughly 120 MByte/s divided across 25 users). Also, do not mix 100 megabit NICs into the network, because devices on the network may slow down to that speed to stay compatible. You can force the network to 1 Gb speeds, but it will cost more time to set up each computer on the network.

smbell100 04-01-2010 06:05 PM

Hi

That is the sort of speed I am getting on my 100Mb/s network. Are you actually connecting at 1Gb/s, or is the network only working at 100Mb/s? I get about 28MB/s on USB2.
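A quick way to check the negotiated speed (eth0 here is just a placeholder for your interface):

Code:

ethtool eth0 | grep -E 'Speed|Duplex'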

jefro 04-01-2010 07:36 PM

Can you enable checksum offloading on both ends and switch to half duplex, then try it again?

ikmalf 04-01-2010 11:53 PM

Quote:

Originally Posted by smbell100 (Post 3920969)
Hi

That is the sort of speed I am getting on my 100Mb/s network. Are you actually connecting at 1Gb/s, or is the network only working at 100Mb/s? I get about 28MB/s on USB2.

I'm pretty sure 'physically' it is a GigE network, since:

1) the client and server connect directly to each other during the tests, without any intermediary (e.g. a switch) in between.

2) the ports on both ends are GigE ports.

The Cat5e cable was our initial suspect, since we managed to get a 70-80MByte/s data transfer rate on another, similar setup which uses Cat6 cables.

But since Electro mentions that Cat5e is rated for 1Gbps, I guess we have to look further into the OS/filesystem configs.

ikmalf 04-02-2010 12:04 AM

Jefro,

Quote:

Originally Posted by jefro (Post 3921054)
Can you enable checksum offloading on both ends and switch to half duplex, then try it again?

Apologies for my ignorance.
How do we do that?
Based on a few Google searches, will this do, or am I missing other necessary steps for half duplex?

e.g: if eth0 is our network interface

# ethtool -K eth0 tx off

Source : http://www.meshwalk.com/?p=34
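For the half-duplex part, our best guess (please correct us if this is wrong) would be something along the lines of:

Code:

# ethtool -s eth0 duplex half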

mweed 04-02-2010 12:55 AM

Quote:

1) What's the practical max data transfer rate you guys have managed to observe on a normal Gigabit connection? What about with a jumbo frames configuration?
There are many factors that come into play here. Usually the network will not be the bottleneck. If you are doing large sequential I/O transfers, then you can max out the gigabit link. But a typical multi-client NFS server will see mostly random I/O and be disk-bound, even on super fast 15K SAS RAID10 arrays. It depends on your usage pattern.

Quote:

2) Is there any additional tuning/configuration we need to do within the OS to reach that practical max data transfer rate figure?
Adjust the TCP window and queue sizes.
For NFS, make sure rsize and wsize are 32K. Try UDP vs. TCP. Try version 4 instead of version 3. Mount clients read-only (ro) unless they really need to write.
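For example (these values are illustrative starting points rather than tuned recommendations, and eth0, server:/export and /mnt/nfs are placeholders):

Code:

# larger TCP buffer limits
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
# longer transmit queue on the NIC
ifconfig eth0 txqueuelen 1000
# NFS mount with explicit rsize/wsize, read-only
mount -t nfs -o rsize=32768,wsize=32768,ro server:/export /mnt/nfs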

Quote:

3) Does the PCI-e / system bus play a role in achieving this speed? For example, we are using the embedded GigE port, and we have heard that embedded ports actually share the system bus and resources with other devices, which might add to performance issues. Correct me if I'm wrong.
It is possible. You can try the following to remove the NFS part of the equation.

On one side of the link run:
Code:

nc -l 1234 > /dev/null
On the other side run:
Code:

dd if=/dev/zero bs=1460 count=$((1024 * 1024)) | nc -v $ip_of_listening_host 1234
That will copy ~1.5GB nearly as fast as the network will transfer it; dd prints the effective throughput when it finishes.

Quote:

4) Will converting to Cat6 cabling guarantee an increase in data transfer performance?
No. Only if you are having rx or tx errors.

Check the output of netstat -s and ifconfig for errors, retransmits, etc. Are you currently seeing any packet loss?
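For example (eth0 is a placeholder for the interface under test):

Code:

ifconfig eth0 | grep -E 'errors|dropped|overruns'
netstat -s | grep -i retrans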

Quote:

5) In the future (once we are clear on how much transfer rate a single GigE link can reach), we are looking into bonding, since the NFS server's shared directory/volume read/write speed is much higher (i.e. 400-600MByte/s). Will bonding allow us to achieve higher NFS read/write speeds? Which bonding modes are best suited for this purpose? I would appreciate it if anybody who has experience with bonding for NFS could share it.
If the usage pattern is being limited by the network then yes, bonding would help. As for which mode, I'd say experiment with the balance modes.
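A minimal bonding sketch (the interface names, address and balance-rr mode are assumptions; other modes such as 802.3ad also need matching switch support):

Code:

# load the bonding driver in round-robin mode with link monitoring
modprobe bonding mode=balance-rr miimon=100
# bring up the bond and enslave the two physical NICs
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1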


Make sure both interfaces are running at 1000/full. Use ethtool.

Maybe look into NFS alternatives. Gluster is good for parallel scaling to high transfer rates, but does not do well with small files and concurrent file access. Maybe run iSCSI with OCFS.

jefro 04-02-2010 03:12 PM

# apt-get install ethtool net-tools (or use your distro's software management): http://www.cyberciti.biz/faq/linux-c...ethernet-card/

"What's the practical max data"
It depends on the slowest component in the transfer. If the NIC is on PCI-e, then the hard drive transfer rate is most likely the limit; PCI or PCI-X can fall below gigabit speeds. A backplane-embedded port is much the same as an add-on card for the most part; knowing how it is attached to the backplane would help too.


I'd test your LAN cable with a good tester too. That may be the issue. I have purchased some from stores that were bad.

In a perfect computer-to-computer situation, the max may only be about 70% of gigabit speed.

ikmalf 04-07-2010 04:53 AM

Quote:

Originally Posted by mweed (Post 3921270)

On one side of the link run:
Code:

nc -l 1234 > /dev/null
On the other side run:
Code:

dd if=/dev/zero bs=1460 count=$((1024 * 1024)) | nc -v $ip_of_listening_host 1234
That will copy ~1.5GB nearly as fast as the network will transfer it; dd prints the effective throughput when it finishes.



Make sure both interfaces are running at 1000/full. Use ethtool.

Just some updates on the initial issue :

1) Running tests using netcat/nc and ethtool does help! ethtool shows that the interface on the current server runs at 100Mbit/s even though it supports 1000Mb/s. This by itself caps the file transfer rate at 10-11MByte/s. Sigh.

2) Interestingly, this 100Mbit/s issue only happens on one of the network interfaces (eth3) on the server. The other interfaces on the same server run smoothly at 1000Mbit/s.


We tried using ethtool on both the client and server end ("ethtool -s <iface> speed 1000"), but it is still the same :-(


Any ideas?


ethtool output (server) :
# ethtool -s eth3 speed 1000
# ethtool eth3
Settings for eth3:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 100Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: g
Wake-on: d
Link detected: yes




ethtool output (client) :

# ethtool -s eth1 speed 1000
# ethtool eth1

Settings for eth1:
Supported ports: [ MII ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Advertised auto-negotiation: No
Speed: 100Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: g
Wake-on: d
Current message level: 0x000000ff (255)
Link detected: no

nonamenobody 04-07-2010 05:56 AM

Quote:

Originally Posted by ikmalf (Post 3920403)
In these tests, the NFS server and client were connected directly to each other with a Cat5e cable.

Is the cable a cross-over cable or are you relying on auto MDI/X? If you are using a crossover cable, make sure that all 4 pairs are crossed as Gigabit ethernet requires the use of all 4 pairs. If you are using auto MDI/X, try using a suitable crossover cable.

jefro 04-07-2010 04:26 PM

I have had some bad cables. Not a bad idea to swap in a known good one.

Recheck the switch or whatever it is connected to. Using a known good port would be helpful.

Look at the drivers being used also.

Turn on checksum offloading, or at least compare with and without it. Watch CPU load during the tests; if the transfer is CPU-intensive on only one end, checksumming may need to be offloaded to the NIC.
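For example (eth3 is assumed as the interface in question), checking the driver, the current offload settings, and enabling checksum offload would look roughly like:

Code:

ethtool -i eth3               # driver, version, firmware
ethtool -k eth3               # current offload settings
ethtool -K eth3 rx on tx on   # enable rx/tx checksum offload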

Can we assume the NICs are all of a similar model and version level?

ikmalf 04-08-2010 09:48 PM

Quote:

Originally Posted by nonamenobody (Post 3927271)
Is the cable a cross-over cable or are you relying on auto MDI/X? If you are using a crossover cable, make sure that all 4 pairs are crossed as Gigabit ethernet requires the use of all 4 pairs. If you are using auto MDI/X, try using a suitable crossover cable.

We are using Auto MDI/X.

Quote:

Originally Posted by jefro (Post 3927867)
I have had some bad cables. Not a bad idea to swap in a known good one.

Recheck the switch or whatever it is connected to. Using a known good port would be helpful.

Look at the drivers being used also.

Turn on checksum offloading, or at least compare with and without it. Watch CPU load during the tests; if the transfer is CPU-intensive on only one end, checksumming may need to be offloaded to the NIC.

Can we assume the NICs are all of a similar model and version level?

OK, we will swap the cables (and ports) and see what the results are. Yes, all NICs are similar. Based on 'lspci', I believe it's the built-in quad-port Broadcom that comes with the Dell R610.


Any other ideas/comments appreciated :-)
Will update you guys on the outcome once we manage to get this swapping / testing done.

mweed 04-08-2010 10:59 PM

Just a guess, but it won't hurt to try while waiting to get the cables swapped. Run:
ethtool -s eth1 speed 1000 duplex full autoneg on

