Linux - Networking
This forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
We have a Linux box which acts as a file server. Currently, files and directories are exported using NFS.
At the moment, we are a bit concerned about its data transfer performance.
FYI, we are using an embedded Gigabit Ethernet port on the file server.
We ran a few simple write tests between an NFS client (which also uses a GigE port) and the NFS server. In these tests, the NFS server and client were connected directly to each other with a Cat5E cable. Unfortunately, the write/transfer speed results are not what we expected: roughly 11-12 MByte/s, whereas Gigabit Ethernet can theoretically reach approximately 120 MByte/s.
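As a point of reference, the raw line rates work out as follows with simple shell arithmetic (decimal MB/s, ignoring protocol overhead); the observed 11-12 MByte/s is almost exactly the line rate of a 100 Mbit link:

```shell
# Line rate in MB/s = bits per second / 8 bits per byte / 10^6
echo "1 GbE:   $(( 1000000000 / 8 / 1000000 )) MB/s"   # 125 MB/s
echo "100 MbE: $((  100000000 / 8 / 1000000 )) MB/s"   # 12 MB/s (12.5, truncated)
```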
I wouldn't expect to reach the theoretical max transfer rate (though it would be great if we could), but I would appreciate it if you guys could share with us the following:
1) What's the practical max data transfer rate you have managed to observe over a normal Gigabit connection? What about with jumbo frames configured?
2) Is there any additional tuning/configuration we need to do within the OS to reach that practical max transfer rate?
3) Do PCI-e / the system bus play a role in achieving this speed? For example, we are using the embedded GigE port, and we have heard some people say embedded ports actually share the system bus and resources with other devices, which might add to performance issues. Correct me if I'm wrong.
4) Will converting to Cat6 cabling guarantee an increase in data transfer performance?
5) In the future (once we are clear on how fast a single GigE link can go), we are looking into bonding, since the NFS server's shared directory/volume read-write speed is much higher (i.e. 400-600 MByte/s). Will bonding allow us to achieve higher NFS read/write speeds? Which bonding modes are best for this purpose? We'd appreciate it if anybody who has experience with bonding for NFS could share it.
Quote:
Originally Posted by ikmalf
1) What's the practical max data transfer rate you have managed to observe over a normal Gigabit connection? What about with jumbo frames configured?
Setting jumbo frames might help, but I think it is more of a workaround than a real fix for your problem.
Quote:
Originally Posted by ikmalf
2) Is there any additional tuning/configuration we need to do within the OS to reach that practical max transfer rate?
Use XFS or JFS as your file system instead of EXT3 or EXT4. If you are using only one hard drive, that will not be adequate to hit the 120 MB per second goal. A mechanical hard drive has both a maximum and a minimum bandwidth, and you should plan around the minimum throughput, because that is the limit under the worst conditions. If a hard drive's minimum bandwidth is 25 megabytes per second, you will need at least four hard drives to reach 100 megabytes per second, and at least five to be comfortable at 120 megabytes per second. Since there is overhead, you need more hard drives to compensate for it; about eight should do. That is just with the drives in RAID-0; setting up RAID-5 or RAID-6 will roughly double the drive count. After you have done all of this, yes, you can then optimize the kernel and NFS to make the software work as efficiently as possible.
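The drive-count estimate above can be sketched as ceiling division (the 25 MB/s worst-case per-drive figure is Electro's assumption, not a measured value):

```shell
target=120       # desired throughput, MB/s
per_drive=25     # assumed worst-case throughput of one mechanical drive, MB/s
# ceiling division: RAID-0 drives needed before allowing for overhead
echo "$(( (target + per_drive - 1) / per_drive )) drives"   # prints "5 drives"
```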
Quote:
Originally Posted by ikmalf
3) Do PCI-e / the system bus play a role in achieving this speed? For example, we are using the embedded GigE port, and we have heard some people say embedded ports actually share the system bus and resources with other devices, which might add to performance issues. Correct me if I'm wrong.
In your case this is not the problem. That said, any computer with too many add-ons installed will have its performance penalized.
Quote:
Originally Posted by ikmalf
4) Will converting to Cat6 cabling guarantee an increase in data transfer performance?
Sure, better cable could be used, but since CAT-5e is rated for 1 Gb speeds, you do not need to worry about this yet. At this point it is like buying the most expensive audio cable to get the best sound quality when other components in the system are what actually produce the sound. In your case the limit is more likely elsewhere in your setup.
Quote:
Originally Posted by ikmalf
5) In the future (once we are clear on how fast a single GigE link can go), we are looking into bonding, since the NFS server's shared directory/volume read-write speed is much higher (i.e. 400-600 MByte/s). Will bonding allow us to achieve higher NFS read/write speeds? Which bonding modes are best for this purpose? We'd appreciate it if anybody who has experience with bonding for NFS could share it.
I have not used network bonding. It should provide more throughput, but at the cost of more latency. Bonding does require the same number of NICs on each system. It is best to use one NIC to simplify setup and troubleshooting, and better to use multiple servers to spread out the load than one big one.
For any network, limit the number of connections. If you want to let 25 users access the server, figure that each user will get around 5 megabytes per second from a 1 gigabit NIC. Also, do not allow a mixture of 100 megabit NICs on your network, because links may slow down to that speed for compatibility. You can pin the network at 1 Gb speeds, but it will cost more time to set up each computer on the network.
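The 25-user figure above works out like this (integer arithmetic in MiB/s; in decimal MB/s it comes to roughly 5):

```shell
users=25
# 1 Gbit/s shared evenly: bits/s -> bytes/s -> MiB/s -> per user
echo "$(( 1000000000 / 8 / 1048576 / users )) MiB/s per user"   # prints "4 MiB/s per user"
```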
That is the sort of speed I get on my 100Mb/s network. Are you actually connecting at 1Gb/s, or is the network only running at 100Mb/s? I get about 28MB/s on USB2.
I'm pretty sure it is 'physically' a GigE network, since:
1) both client and server connect directly to each other during the tests, without any intermediary (e.g. a switch) in between;
2) the port on each end is a GigE port.
The Cat 5E cable was our initial suspect, since we managed to get a 70-80 MByte/s transfer rate on another similar setup which uses Cat 6 cables.
But since Electro mentions Cat 5E is rated for 1Gbps, I guess we have to look further into the OS/filesystem configs.
Quote:
1) What's the practical max data transfer rate you have managed to observe over a normal Gigabit connection? What about with jumbo frames configured?
There are many factors that come into play here. Usually the network will not be the bottleneck. If you are doing large sequential I/O transfers, then you can max out the gigabit link, but a typical multi-client NFS server will be doing mostly random I/O and will be disk-bound even on super-fast 15K SAS RAID10 arrays. It depends on your usage pattern.
Quote:
2) Is there any additional tuning/configuration we need to do within the OS to reach those practical max data transfer rate figure?
Adjust the TCP window and queue sizes.
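For the TCP window and queue sizes, a common starting point is sysctl settings along these lines (example values only, not tuned for this particular server):

```
# /etc/sysctl.conf additions: raise socket buffer limits and the RX backlog
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.netdev_max_backlog = 2500
```

Apply with `sysctl -p` and re-run the transfer tests to compare.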
For NFS, make sure rsize and wsize are 32K. Try UDP vs TCP. Try version 4 instead of version 3. Mount clients as ro unless they really need to write.
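As a concrete example, the mount options above could go into the client's /etc/fstab like this (the server name and paths are placeholders):

```
# server:/export and /mnt/nfs are hypothetical names; drop "ro" on clients
# that must write, and try vers=4 or udp as alternatives to compare
server:/export  /mnt/nfs  nfs  rsize=32768,wsize=32768,tcp,vers=3,ro  0 0
```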
Quote:
3) Do PCI-e / the system bus play a role in achieving this speed? For example, we are using the embedded GigE port, and we have heard some people say embedded ports actually share the system bus and resources with other devices, which might add to performance issues. Correct me if I'm wrong.
It is possible. You can try the following to take NFS out of the equation.
That will copy ~1.5G at close to the fastest rate the network will transfer.
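The exact command referred to above did not survive in this copy of the thread; a typical raw-transfer test of this kind pipes dd through netcat (the port number and byte count here are illustrative, and nc option syntax varies between netcat implementations):

```shell
# On the NFS server: listen on a TCP port and discard incoming data
nc -l -p 5000 > /dev/null

# On the client: push ~1.5 GB of zeros across the wire and time it
time dd if=/dev/zero bs=1M count=1500 | nc <server-ip> 5000
```

Dividing 1500 MB by the elapsed time gives the raw network throughput with no NFS or disk involvement.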
Quote:
4) Will converting to Cat6 cabling guarantee an increase in data transfer performance?
No. Only if you are having rx or tx errors.
Check the output of netstat -s and ifconfig for errors, retransmits, etc. Are you currently seeing any packet loss?
Quote:
5) In the future (once we are clear on how fast a single GigE link can go), we are looking into bonding, since the NFS server's shared directory/volume read-write speed is much higher (i.e. 400-600 MByte/s). Will bonding allow us to achieve higher NFS read/write speeds? Which bonding modes are best for this purpose? We'd appreciate it if anybody who has experience with bonding for NFS could share it.
If the usage pattern is limited by the network, then yes, bonding would help. As for which mode, I'd have to say experiment with the balance modes.
Make sure both interfaces are running at 1000/full. Use ethtool.
Maybe look into NFS alternatives. Gluster is good for parallel scaling to high transfer rates, but does not do well with small files and concurrent file access. Maybe run iSCSI with OCFS.
"What's the practical max data"
It depends on the slowest component of the transfer. With PCI-e, the limit is most likely the hard drive transfer rate; plain PCI or PCI-X fall well below gig speeds. An embedded (on-board) port is much the same as an add-on card for the most part; knowing how it is attached to the back-plane would help too.
I'd test your LAN cable with a good tester too. That may be the issue; I have purchased some from stores that were bad.
In a perfect situation, computer to computer, the max may only be 70% of gig speed.
That will copy ~1.5G at close to the fastest rate the network will transfer.
Make sure both interfaces are running at 1000/full. Use ethtool.
Just some updates on the initial issue:
1) Running tests using netcat/nc and ethtool did help! Ethtool shows that the current server runs at 100Mbit/s even though it supports 1000Mb/s. This by itself cuts the file transfer rate down to 10-11 MByte/s. Sigh.
2) Interestingly, this 100Mbit/s issue only happens on one of the network interfaces (e.g. eth3) on the server. The other interfaces on the same server run happily at 1000Mbit/s.
We tried "ethtool -s <iface> speed 1000" (on both the client and server end) but it is still the same :-(
Any ideas?
ethtool output (server) :
# ethtool -s eth3 speed 1000
# ethtool eth3
Settings for eth3:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 100Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: g
Wake-on: d
Link detected: yes
ethtool output (client) :
# ethtool -s eth1 speed 1000
# ethtool eth1
Settings for eth1:
Supported ports: [ MII ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Advertised auto-negotiation: No
Speed: 100Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: g
Wake-on: d
Current message level: 0x000000ff (255)
Link detected: no
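Two things stand out in the client output above: "Advertised auto-negotiation: No" and "Link detected: no". Forcing a speed with `ethtool -s ethX speed 1000` can disable autonegotiation advertising, and 1000BASE-T requires auto-negotiation to establish a link, so a forced-speed gigabit link generally will not come up. A sketch of re-enabling it (needs root; eth1 is the client interface from the output above):

```shell
# Re-enable autonegotiation while requesting gigabit full duplex
ethtool -s eth1 speed 1000 duplex full autoneg on

# Alternatively, restrict the advertised modes: 0x020 is 1000baseT/Full
ethtool -s eth1 advertise 0x020

# Re-check the negotiated result
ethtool eth1 | grep -E 'Speed|Auto-negotiation|Link detected'
```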
In these tests, the NFS server and client were connected directly to each other with a Cat5E cable.
Is the cable a cross-over cable, or are you relying on auto MDI/X? If you are using a crossover cable, make sure that all 4 pairs are crossed, as Gigabit Ethernet requires the use of all 4 pairs. If you are using auto MDI/X, try using a suitable crossover cable.
Last edited by nonamenobody; 04-07-2010 at 06:14 AM.
Is the cable a cross-over cable, or are you relying on auto MDI/X? If you are using a crossover cable, make sure that all 4 pairs are crossed, as Gigabit Ethernet requires the use of all 4 pairs. If you are using auto MDI/X, try using a suitable crossover cable.
We are using Auto MDI/X.
Quote:
Originally Posted by jefro
I have had some bad cables. Not a bad idea to swap in a known good one.
Recheck the switch or whatever it is connected to. Using a known good port would be helpful.
Look at the drivers being used also.
Turn on checksum offloading, or at least compare with and without it. Check CPU load during the tests; if the load is CPU-heavy on one side only, checksumming may need to be offloaded to the NIC.
Can we assume the NICs are all the same model and version level?
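The checksum-offload point above can be checked with ethtool's offload flags (a sketch; eth3 is the server interface from earlier in the thread, and -K needs root):

```shell
# Show which offloads are currently enabled on the interface
ethtool -k eth3

# Enable TX/RX checksum offload so the NIC, not the CPU, computes checksums
ethtool -K eth3 tx on rx on
```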
OK, we will swap the cables (and ports) and see what the results are. Yes, all the NICs are the same. Based on 'lspci', I believe it's the built-in quad-port Broadcom which comes with the Dell R610.
Any other ideas/comments appreciated :-)
Will update you guys on the outcome once we manage to get this swapping/testing done.