Linux - Networking
This forum is for any issue related to networks or networking.
Routing, network cards, OSI, etc. Anything is fair game.
When I try to transfer multiple small files from my (Ubuntu 10.10) desktop PC to my newly built Backup Server (Debian 6 Squeeze) I am getting very low transfer speeds.
It took me 48 hours to do my initial backup!
I have an NFS share mounted on my desktop.
Copying a large file (e.g. a 600MB Ubuntu ISO) flies along at 50MB/sec (dropping to 0 every so often and then jumping back to 50MB/sec, which I presume is the HDD write speed limiting it).
When I try to copy a folder of images (~2MB each) the transfer speed is around 2MB/sec!
I have been Googling for a solution and tried a number of suggested ethtool settings, but nothing seems to work.
Any ideas?
ethtool(Desktop)
Code:
tom@zaphod:/$ sudo ethtool eth1
Settings for eth1:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
Auto-negotiation: on
MDI-X: Unknown
Supports Wake-on: pg
Wake-on: g
Current message level: 0x00000037 (55)
Link detected: yes
ethtool -a Desktop
Code:
tom@zaphod:/$ sudo ethtool -a eth1
Pause parameters for eth1:
Autonegotiate: on
RX: on
TX: on
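Before tuning anything, it can help to time the same kind of small-file copy purely locally, to separate disk overhead from network/NFS overhead. A minimal sketch (paths under /tmp are examples, not from the thread):

```shell
# Create 100 small (64KB) test files, then time a purely local copy.
# If the local copy is fast, the bottleneck is the network/NFS layer,
# not the disks. Paths under /tmp are examples.
mkdir -p /tmp/smallfile-test/src
for i in $(seq 1 100); do
    dd if=/dev/zero of=/tmp/smallfile-test/src/file$i bs=1k count=64 2>/dev/null
done
time cp -r /tmp/smallfile-test/src /tmp/smallfile-test/dst
```

Running the same cp with the destination on the NFS mount point then shows how much of the slowdown NFS itself adds.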
You need to upgrade to a gigabit-capable router in order to take advantage of the full gigabit speed. However, if you have any other machines with legacy 10/100 Fast Ethernet network interface cards, your network will run at 100Mb/s no matter if every computer has a gigabit Ethernet adapter. A great example of this auto-negotiation feature is my network printer, which only has a 100Mb Ethernet adapter, meaning our whole wired network will run at 100Mb because that is the common data rate that all of the networked devices on my network share. In conclusion, unless you have gigabit backbone infrastructure and gigabit network interface cards in every machine, you will not get the full benefit of gigabit transfer rates.
I have a router with 100Mb Ethernet ports that is connected to a gigabit (1000Mb) switch. Both the desktop and server have gigabit network adapters and are also connected to this same gigabit switch.
In summary:
Code:
100Mb router ----> 1000Mb switch <---- desktop PC
                                 <---- backup server
Quote:
I have a router with 100Mb Ethernet ports that is connected to a gigabit (1000Mb) switch. Both the desktop and server have gigabit network adapters and are also connected to this same gigabit switch.
Code:
1000Mb router ----> 1000Mb switch <---- desktop PC
                                  <---- backup server
Without a gigabit router you will not get a "true gigabit network".
Quote:
You need to upgrade to a gigabit capable router in order to take advantage of the full gigabit speed... if you have any other machines with legacy 10/100 Fast Ethernet network interface cards your network will run at 100Mb/s no matter if every computer has a gigabit Ethernet adapter.
This is not true. Today's switches can all negotiate the transfer rate per port. If you have one 100Mb/s device, it won't affect the other devices on the network.
Quote:
When I try to transfer multiple small files from my (Ubuntu 10.10) desktop PC to my newly built backup server (Debian 6 Squeeze) I am getting very low transfer speeds. Copying a large file flies along at 50MB/sec, but a folder of ~2MB images transfers at around 2MB/sec.
I have seen the same problem with my old server and my newly built one. Do you have software RAID on the server? I have seen software RAID cause this kind of problem. A few notes:
- SW RAID5 with the JFS filesystem and NFS was a horrible combination on my old server. Samba worked well, with a solid 30MB/s transfer speed. NFS gave me a speed wandering between 40MB/s and 0. When I moved the partition to ext3, I got a solid 30MB/s with NFS too.
- With the new server and much more horsepower, I can't go over 40MB/s with NFS and software RAID5. I now use the XFS filesystem. With small files I sometimes see speeds under 10MB/s. For some reason Samba performs better.
- async is a must in the NFS options. Other options didn't give me much of a speed increase.
- With NFS I see much more system load than with Samba.
Quote:
This is not true. Today's switches can all negotiate the transfer rate per port.
Thank you for clarifying that. I thought that was the case.
Quote:
I have seen same problem with my old and new just built server. Do you have software raid on server? I have seen that software raid can cause this kind of problems.
I have a software (md) RAID 1 setup with 2 x 1.5TB drives. These are formatted with ext4.
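One way to rule the md array in or out (not from the thread; the mount point is an assumption) is to check its status and measure raw sequential write speed on the server:

```shell
# Check the md array is healthy (a rebuild in progress would badly
# hurt write speed):
cat /proc/mdstat
# Measure raw write speed to the array; conv=fdatasync forces the data
# to disk before dd reports a rate. /mnt/raid is an example mount point.
dd if=/dev/zero of=/mnt/raid/ddtest bs=1M count=512 conv=fdatasync
rm /mnt/raid/ddtest
```

If the array writes far faster than 2MB/s, the disks are not the bottleneck for the small-file copies.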
Your networking is fine. I have a 10/100 router connected to 4-port and 8-port gigabit switches. We see 70-90 MiB/s transfer rates.
The problem you're seeing is with NFS. NFS is slow at creating files and directories by default.
Some things I tried, before just defaulting to CIFS for all file sharing:
Use tcp instead of udp for NFS.
DO NOT use rsize and wsize. NFS auto-negotiates the maximum r/w values; you will either specify a number that is too small or non-optimal. For the life of me I cannot figure out why every single document tells users to specify this number. Copy-and-paste blogging at its best.
Code:
If a wsize value is not specified, or if the specified wsize value is larger than the maximum that either client or server can support, the client and server negotiate the largest wsize value that they can both support.
Mount with the intr option.
Try mounting with the async or sync options (run tests to see which works better on your end). I personally prefer sync. It gives the appearance of slower transfers, but with async, file operations are cached and not actually written. When I put a file to disk, I want it on disk, not in RAM. Then again, we have a couple of clients here that share and access the same files. Your situation could be different.
Code:
In other words, under normal circumstances, data written by an application may not immediately appear on the server that hosts the file. If the sync option is specified on a mount point, any system call that writes data to files on that mount point causes that data to be flushed to the server before the system call returns control to user space. This provides greater data cache coherence among clients, but at a significant performance cost.
Make sure atime options are turned off as well; this can slow down file operations (noatime, nodiratime, norelatime, ...). Google or read the man page to make sure these options do not impact programs you're using.
Code:
iperf -f m -s -t 30
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------
[  4] local 192.168.1.111 port 5001 connected with 192.168.1.109 port 60509
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1124 MBytes  941 Mbits/sec
With a higher MTU and sysctl net.ipv4.* tuning you can reach higher numbers. On my network, though, I have a mix of Ethernet chipsets, and they don't all play well together when the server is tuned. Also, when specifying a higher MTU, the 10/100 clients and wireless clients have issues connecting to the server. The defaults work well enough here.
Take a glance at man nfs and trial/error a few of the options.
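Putting the suggestions above together, a client-side mount might look like this (the server name marvin, export path, and mount point are placeholders; async vs sync is your call):

```shell
# /etc/fstab entry (one line) combining tcp, intr, and no-atime options;
# no rsize/wsize, so the client and server negotiate them:
# marvin:/export/backup  /mnt/backup  nfs  tcp,intr,noatime,nodiratime  0  0

# Or mounted by hand while testing:
sudo mount -t nfs -o tcp,intr,noatime,nodiratime marvin:/export/backup /mnt/backup
```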
ethtool -a Server
Try ethtool -k eth0 if you were looking for the offload settings.
ethtool -h / man ethtool for more info.
NFS is a much, much lighter protocol than CIFS. On my file server, putting a 500GiB file has ~2% CPU overhead; CIFS is ~10% CPU. If NFS is using more resources, revisit your configuration.
I have been trying out your suggestions but have not managed to get any increase in the transfer speed for multiple small files.
I have set NFS to use tcp
I have removed rsize and wsize mount options
I have mounted with intr option
I have disabled atime, diratime and relatime
I measured transfer with iperf. This is about the same transfer speed that I get if I copy a large file over NFS.
Code:
tom@zaphod:/$ iperf -c 192.168.1.10
------------------------------------------------------------
Client connecting to 192.168.1.10, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.1.70 port 57373 connected with 192.168.1.10 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 545 MBytes 457 Mbits/sec
ethtool -k eth1 (desktop)
Code:
tom@zaphod:/$ sudo ethtool -k eth1
Offload parameters for eth1:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: off
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: off
large-receive-offload: off
ntuple-filters: off
receive-hashing: off
ethtool -k eth0 (server)
Code:
tom@marvin:~$ sudo ethtool -k eth0
Offload parameters for eth0:
rx-checksumming: on
tx-checksumming: off
scatter-gather: off
tcp-segmentation-offload: off
udp-fragmentation-offload: off
generic-segmentation-offload: off
generic-receive-offload: off
large-receive-offload: off
ntuple-filters: off
receive-hashing: off
I notice that tx-checksumming, scatter-gather and generic-segmentation-offload are off on the server but are on for the desktop machine.
I am not entirely sure what these features do, but could this mismatch contribute to the transfer speed problems?
Last edited by explosive_tom; 02-15-2011 at 12:25 PM.
Quote:
I have been trying out your suggestions but have not managed to get any increase in the transfer speed for multiple small files. I measured transfer with iperf:
[ 3] 0.0-10.0 sec 545 MBytes 457 Mbits/sec
Your iperf results are low for gigabit. You're getting 50% of what my results were with two cheap built-in Realtek 8168 cards. 1000 Mbits/sec is considered 100% efficiency; you're hitting roughly 50%, which is not good at all.
You have a weak link somewhere. NICs, cables, or your switch.
Double check your ethernet cables, perhaps even replace them if you already have spares.
Make sure your switch is connected properly. Not all switches support auto-uplink, and some that state they support auto-uplink do not. PCs should be plugged into ports 1-3, with the uplink connection in port 5. Uplink is the connection that uplinks your switch to your router.
ethtool -k eth1 (desktop)
Try turning TX checksumming and scatter-gather off. This can add unneeded overhead on weak NICs.
Code:
ethtool -K eth1 tx off
ethtool -K eth1 sg off
I have an Atheros built-in NIC on an Asus board which, to be nice, is crap. It behaved much the same way as your results. I replaced it with a $5 r8169 NIC. Cured all of my network speed issues.
Thank you for the suggestions - I have tried many things - still no joy
I have swapped the ethernet cables
I have reorganised connections in my switch so uplink is now in port 5
I have tried two new network cards in my desktop machine (see below)
There were a couple of spare gigabit nics lying around at work so I have borrowed them for the evening!
The two Marvell Technology NICs are the two built-in Ethernet ports on my ASUS motherboard. (I have tested with both.) The Realtek and D-Link are the ones that I have just put in.
I ran iperf on the two new cards and noticed an improvement! Still not as good as your results but better than I get with both onboard nics.
Realtek 8169
Code:
------------------------------------------------------------
Client connecting to 192.168.1.10, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.1.72 port 45109 connected with 192.168.1.10 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 727 MBytes 610 Mbits/sec
D-Link DGE-528T
Code:
------------------------------------------------------------
Client connecting to 192.168.1.10, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.1.73 port 33659 connected with 192.168.1.10 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 739 MBytes 620 Mbits/sec
I then tested copying files over NFS. I get a near-identical transfer rate as with the onboard NICs: ~50-60MB/s for a large (600MB) file but still only ~2.5MB/sec for a folder of 2MB files.
It seems odd that my iperf stats have gone up but file transfer is still bad. It just seems very odd that it affects groups of small files so badly, and that tuning the NFS settings and switching off atime etc. made no noticeable difference at all :s
Thank you for your continued help!
EDIT:
I have done further testing. I have put one of the spare gigabit NICs into the backup machine and nothing improves. The problem could possibly be to do with the r8169 driver that the backup server uses for its onboard NIC. The problem is that both of the spare NICs that I borrowed from work also use the r8169 driver!
Last edited by explosive_tom; 02-16-2011 at 02:17 PM.
I have borrowed my fiancée's PC, which runs Ubuntu 10.10 (same as the desktop I am using), and mounted an NFS share from it onto my PC. Using the same two network cards I borrowed from work (one in each PC) I managed to get 70MB/s copying a large ISO from one to the other and 25MB/s copying the same folder of pictures, which is 10x as fast as copying to my backup machine, although still rather slow!
The backup machine runs Debian 6 Squeeze, and so I think it must have problems with its r8169 driver, although it appears to be an identical version to the one in Ubuntu 10.10, so I am very confused!
I have plenty of experience with r8169 and r8168 cards. They are not the same quality as an Intel NIC; nevertheless, they are not that bad.
To be honest, I was doing a bit of pondering on this issue. Some things come to mind.
1. Large file transfer results are pretty good - does not show an immediate problem.
2. Iperf results are pathetic. Iperf only exercises the NIC, wires, switch .... not the file system nor CPU (in theory).
3. Directory creation is usually slower, but not as slow as what you are seeing. If the large file was ~60MiB/s and multiple dirs were in the neighborhood of ~40MiB/s, I'd chalk it up to one of those things. But you are seeing less than 3MiB/s (?? Right ??).
4. I do not believe your distro is at fault. Debian is widely used both at home and in a server setting. This issue would have been reported long ago -- I hope.
If this were my setup, I'd test a couple more items. Take the switch out of the picture -- if possible. There are crappy ones on the market, and even good manufacturers made a bad product from time to time.
Try another file system. In the past ext4 has been known to cause strange things to happen with one program or another. Not long ago, there was a kernel oops in net dev, but only if you were using NFS and ext4.
I have run modinfo - my r8169 driver is an identical version to yours on both Ubuntu machines, but on my Debian machine the srcversion line is different. (I presume this means it is using a different (probably older) build of the driver.)
I have done further testing using two Ubuntu 10.10 desktop machines, both with r8169-chip cards in, and I am getting more acceptable performance (still not great for small files).
Iperf results for this setup was:
Code:
tom@zaphod:~$ iperf -c 192.168.1.51
------------------------------------------------------------
Client connecting to 192.168.1.51, TCP port 5001
TCP window size: 22.6 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.1.50 port 36679 connected with 192.168.1.51 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 964 MBytes 808 Mbits/sec
I tweaked the MTU (enabled jumbo frames) on both desktop machines to 7200, which gave a performance increase.
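For reference, the jumbo-frame change can be made (non-persistently) with ip link; the interface name and peer address below are examples, and every device in the path must support the larger frames:

```shell
# Raise the MTU to 7200 on this interface (eth1 is an example):
sudo ip link set dev eth1 mtu 7200
ip link show dev eth1 | grep -o 'mtu [0-9]*'
# Verify jumbo frames actually pass end-to-end without fragmentation;
# 7172 = 7200 minus 28 bytes of IP+ICMP headers:
ping -c 3 -M do -s 7172 192.168.1.51
```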
I then tried copying a large file over NFS and got 70MB/sec; my folder of small files was about 27MB/sec.
I then copied the same files using SCP and I get ~25MB/sec for both the large file and the small files!
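Since per-file overhead is what hurts here, one common workaround (not suggested in the thread; the host and paths are examples) is to stream the whole directory as a single tar over ssh, so the network sees one continuous stream instead of thousands of small round trips:

```shell
# Pack the source directory into one tar stream and unpack it on the
# remote side: one TCP stream, no per-file protocol round trips.
# Host 192.168.1.10 and both paths are examples.
tar -C /home/tom/Pictures -cf - . | ssh 192.168.1.10 'tar -C /mnt/backup/Pictures -xf -'
```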
I removed the switch from the equation and just linked the 2 desktops together with a cable and the performance was identical.
I have tried moving one of the r8169 gigabit NICs out of the 2nd desktop back into my backup server, and the performance for small files crashes back to 2MB/sec.
I think that it must surely be a driver problem with the r8169 on Debian 6?
I was wondering whether it might be worth picking up 2 of these (http://www.intel.com/Products/Deskto...T-overview.htm) Intel NICs and hope that it solves the problem. They are a little on the pricey side though :'(
My file system is ext4 on all 3 systems discussed.