Linux - Networking
This forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
The performance of my Gigabit Ethernet is really bad. I have two hosts with up-to-date Gentoo installations, three kinds of gigabit Ethernet cards (Marvell Yukon - skge, Realtek - r8169, Intel - e1000), a gigabit switch (3Com) and two different cables (patch and crossover, both Cat6). I tried all possible combinations of the cards, with and without the switch; however, when I scp a file from one host to the other I get no more than ~19 MB/s. With 100 Mbit cards I get 11 MB/s!
I'm really stuck now...
Any help greatly appreciated!
Thx, georg
P.S.:
The current setup is "host1(e1000) <-> switch <-> host2(skge)":
ethtool output on host1:
Code:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: umbg
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
ethtool output on host2:
Code:
Settings for eth0:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 1000Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: g
Wake-on: d
Current message level: 0x00000037 (55)
Link detected: yes
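For context on how far off 19 MB/s is, here is a back-of-envelope sketch of the gigabit ceiling (the 94% efficiency figure is the usual approximation for TCP payload over Ethernet with a 1500-byte MTU, not an exact number):

```shell
#!/bin/sh
# Rough ceiling for gigabit Ethernet throughput. About 94% of the
# 1000 Mbit/s line rate is left for TCP payload once Ethernet framing,
# preamble/inter-frame gap, and IP/TCP headers are paid for
# (1500-byte MTU; the 94% is an approximation).
line_mbit=1000
ceiling_bytes=$(( line_mbit * 94 / 100 * 1000000 / 8 ))
echo "TCP payload ceiling: ~$(( ceiling_bytes / 1000000 )) MB/s"
```

So a healthy gigabit link should move well over 100 MB/s of payload; 19 MB/s is roughly a sixth of that.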
Are you sure that you are not bottlenecking somewhere else, like the write capacity of the hard drive?
My home network could be gigabit, but since I have a laptop it would be worthless for what I would use it for: I can't go much beyond 100 Mbps transfers anyway, because my laptop can't write to its hard drive that fast.
What am I looking at here?
Those throughput times look like read *not write* times to me.
Your write-to-hard-drive throughput should be much slower (unless these are RAID setups, of course).
When transferring files, I take it as a general rule of thumb that I am limited by the write speed of the hard drive of the machine that I am transferring to.
I have much better throughput when transferring files *from* my laptop than *to* my laptop.
You could also be running into CPU limits, although my money is on the hard drive being your bottleneck.
*EDIT* BTW, I have two Raptors in RAID 0 on my home machine, so it is a lot more obvious to me when I transfer files back and forth. Although I have yet to check whether my home machine's write speed is actually faster than my laptop's read speed.
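A quick way to put a number on the local write speed is a dd run with an fsync at the end (a sketch; the test-file path and the hdparm device name are placeholders, so point them at the filesystem and disk you actually care about):

```shell
#!/bin/sh
# Measure raw sequential write speed to the local disk. conv=fsync makes
# dd flush before reporting, so the MB/s figure reflects the disk, not RAM.
# Write the test file on the filesystem in question; /tmp is a placeholder.
dd if=/dev/zero of=/tmp/disktest bs=1M count=128 conv=fsync
rm -f /tmp/disktest
# For the read side, 'hdparm -t /dev/hda' (as root; device name assumed)
# gives a comparable buffered-read number.
```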
Quote:
What am I looking at here?
Those throughput times look like read *not write* times to me.
I really just copy-pasted from console, so they are write times.
Host1 has a standard IDE hard drive I bought last year, and host2 has a software RAID 5 with three disks.
However, it suddenly struck me that scp cannot be a good benchmark, because it encrypts the data and probably adds other overhead as well. So I did some tests with dd on the mounted NFS share:
read: ~50 MB/s
write: ~15 MB/s
I still don't understand the bad write performance (I double-checked those local 45 MB/s); however, I can live with this situation now...
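For anyone who wants to reproduce the dd test over NFS, here is a sketch (the mount point is a placeholder; without dropping caches first, the read figure measures local RAM rather than the wire):

```shell
#!/bin/sh
# dd throughput test over an NFS mount. MNT is a placeholder: point it at
# the real NFS mount instead of /tmp to test the network path.
MNT=${MNT:-/tmp}

# Write: conv=fsync so the reported rate includes the flush to the server.
dd if=/dev/zero of="$MNT/nfstest" bs=1M count=64 conv=fsync

# Read: on a real run, first do (as root)
#   sync; echo 3 > /proc/sys/vm/drop_caches
# so the file comes back over the wire rather than from the page cache.
dd if="$MNT/nfstest" of=/dev/null bs=1M
rm -f "$MNT/nfstest"
```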
When I said "Those throughput times look like read *not write* times to me," I meant that the numbers themselves look like read times.
It also seemed odd that your host1's single drive is writing faster than your RAID array. That could be due to newer drive technology.
So a couple of assumptions:
1. Transfers from host1 to host2 should be faster due to host2 being able to write to its raid drive at a faster rate.
2. Transfers from host2 to host1 should bottleneck on the 15MB/s write speed of host1's standard IDE HD.
A comparison of these two scenarios, plus an iperf benchmark (in both directions), might be interesting.
The only other thing I can think of is a bad NIC. I have seen a NIC give asymmetric throughput results before.
thorn168: you are assuming that these NICs are actually on a PCI bus.
g_k: what kind of NICs are in these machines, and if they are built in, what are the makes and models of the motherboards?
In my experience, Ethernet throughput is highly dependent on the size of the packets sent. Maybe the files you are copying are small; I had the same problem a while ago.
Here is a link that shows how throughput depends on packet size:
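If small files are indeed part of the problem, bundling them into one stream avoids the per-file overhead. A sketch (the ssh host name and destination path in the comment are placeholders):

```shell
#!/bin/sh
# Many small files pay per-file overhead (open/close locally, and per-file
# round-trips over scp or NFS). The classic fix is to bundle them into one
# stream, e.g. tar piped through ssh:
#   tar cf - ./dir | ssh host2 'tar xf - -C /destination'
# Local demonstration of the bundling half:
mkdir -p /tmp/many
i=0
while [ $i -lt 100 ]; do echo data > "/tmp/many/f$i"; i=$((i+1)); done
tar cf /tmp/bundle.tar -C /tmp many   # 100 files become one stream
rm -rf /tmp/many /tmp/bundle.tar
```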