
LinuxQuestions.org (/questions/)
-   Linux - Networking (https://www.linuxquestions.org/questions/linux-networking-3/)
-   -   File transfer rates? (https://www.linuxquestions.org/questions/linux-networking-3/file-transfer-rates-293464/)

Matir 02-22-2005 12:15 PM

File transfer rates?
 
I have several computers hooked up to a 10/100 switch. I am using one as a fileserver for backups but am noticing issues in speed. Copies over the network take place at about 1.7 MB/sec (according to scp). Doesn't this seem a bit low?

I'll test with different machines later, but I just wanted thoughts on what I *SHOULD* be seeing.

marghorp 02-22-2005 01:41 PM

You should be seeing 100 Mbit transfers, somewhere around 10 MB/sec, unless there is something wrong with the wiring.

Matir 02-22-2005 02:15 PM

That's what I thought. But all the cable seems to be good. Could the processor or hard drives on one end be causing a bottleneck? What can I test?

broch 02-22-2005 02:18 PM

100Base-T = 100 Mbit/sec
8 bits = 1 byte
100/8 = 12.5 MB/sec
Add TCP overhead, plus your NIC would have to be 100% efficient (which is not the case), and you will end up with at best 10 MB/sec;
usually you may get 8-9.5 MB/sec.
If you have a direct connection then of course it may even double the number.
In my case, SuSE/Samba on a LAN with Windows XP = 8.5 MB/sec on average.

You would also have to tweak the sysctl variables.
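If you want to see the raw TCP throughput without scp's encryption or disk I/O in the way, a rough netcat test works (assuming nc is installed on both ends and 'fileserver' is the server's hostname):

Code:

# on the fileserver: listen on a spare port and discard what arrives
nc -l -p 5000 > /dev/null
# on the client: push 100 MB of zeros and time it
# (Ctrl-C the client nc once dd has printed its stats)
time dd if=/dev/zero bs=1M count=100 | nc fileserver 5000

100 MB in about ten seconds would mean you are getting the full ~10 MB/sec.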

Matir 02-22-2005 02:54 PM

Which sysctl variables should I be looking at?

technochef 02-22-2005 03:45 PM

Abstract thought...

I've had bottlenecks related to iptables: if the iptables rules on the file server are complicated, they can slow things down.

Just something to check.
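For example (a rough test; make sure the default policies are ACCEPT before you flush, or you can cut yourself off):

Code:

# list the rules with packet/byte counters to see what the traffic hits
iptables -L -n -v
# set permissive policies, then flush all rules for a test run
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -F
# your distro's init scripts will normally restore the saved rules on reboot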

Matir 02-22-2005 04:03 PM

Neither machine has anything approaching complex iptables rules, so I feel that's safe to rule out. In fact, removing all iptables rules from the file server (but not the other machine) makes no appreciable difference.

jschiwal 02-22-2005 04:23 PM

What device is actually writing the backups? If it is a tape device, the speeds you are seeing are actually good. A DLT drive will back up around 80 megabytes a minute.

I read that the maximum throughput for an ethernet connection was 37%. So 100 Mbit * 0.37 / 8 = 4.625 megabytes/sec.

A backup program will typically use 2:1 compression, so 9.25 MB/sec would be the limit, which comes very close to broch's estimate of 8-9 MB/sec. The 37 percent is from memory, and may be wrong, and using a switch may bring it up a little. For example, using a switch, your NICs could be running full-duplex connections. However, your communication is mostly one way.

This does bring up a pet peeve of mine, however: not knowing whether MB is megabits or megabytes.
I would like to see Mb represent megabits and MB represent megabytes.

broch 02-22-2005 06:07 PM

Quote:

Which sysctl variables should I be looking at?
Well, there are a lot of them. (I hope that you killed IPv6?)
The first question is: what are you tuning for? Internet, LAN, or both?
Commands and files:
to see sysctl values, enter (as root):
sysctl -a | grep value_name
to add a value permanently, edit /etc/sysctl.conf
to make the sysctl.conf changes take effect right away, run
sysctl -p
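For instance, to check and then persist one of the values below:

Code:

# read the current value
sysctl -a | grep tcp_window_scaling
# make it permanent, then apply without a reboot
echo "net.ipv4.tcp_window_scaling = 1" >> /etc/sysctl.conf
sysctl -p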

Under ideal conditions (a LAN with good wiring/routers and such) I would add:
net.ipv4.tcp_window_scaling = 1
and then of course this
net.core.rmem_max = 16777216
net.core.rmem_default = 87380
net.core.wmem_max = 16777216
net.core.wmem_default = 65535
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65535 16777216
net.ipv4.tcp_mem = 8388608 8388608 16777216

The above will hamper network performance in less-than-ideal conditions, e.g. the Internet, because some routers and firewalls are buggy. In that case do the exact opposite:
net.ipv4.tcp_window_scaling = 0
and then don't worry about the rmem/wmem values, because you just turned scaling off.
You still need these:
net.ipv4.tcp_sack = 1
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 3
net.ipv4.tcp_max_syn_backlog = 2048
net.core.netdev_max_backlog = 1024
net.ipv4.tcp_max_tw_buckets = 360000
kernel.shmall = 67108864
kernel.shmmax = 67108864
and you can play around with the maximum number of open files (but that depends on the kernel; 2.6 is pretty well optimized for open files, so there is no need to modify this):
fs.file-max = 65000

Check the txqueuelen value. You can decrease sendstalls if you change the value to at most 2000 (above 2000 there is no difference). First try 1000, then 2000. This will not increase speed.
The command:
#/sbin/ifconfig eth0 txqueuelen 2000
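and you can check that it took effect (assuming eth0):

Code:

/sbin/ifconfig eth0 | grep -i txqueuelen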

If you have a Realtek NIC, don't expect miracles. So first comes the hardware, then the rest. You can start with the values above.

Before you proceed with the new settings, do
echo 1 > /proc/sys/net/ipv4/route/flush
Don't add this to sysctl.conf, because it is a one-time deal after each boot, so you need to repeat it every time.

and of course the MTU value

See what this test will produce:
http://www.dslreports.com/tweaks/
If the transfer efficiency is less than 100%, then you may need to adjust the settings.

Remember: check the NIC (and eventually get a better one), try the settings above, and tweak them to get the best results. Tweak the FTP server too, of course.
Hope that helps a little.

uberNUT69 02-22-2005 11:39 PM

You could try measuring CPU load with top, and network load with iptraf.
What are the lights on the switch saying?
Are _ALL_ interfaces negotiating 100Mb full duplex?
(if not ... is your switch N-Way capable?)
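A quick way to check the negotiation from the box itself (assuming the interface is eth0; mii-tool is part of net-tools, and ethtool shows more detail if you have it):

Code:

# should print something like: eth0: negotiated 100baseTx-FD, link ok
mii-tool eth0
# speed/duplex detail, if ethtool is installed:
ethtool eth0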

ps. as far as transfer rates on almost any interface go, the rule of thumb is to divide the maximum theoretical bandwidth (in Mb) by 10 to get the approximate transfer rate (in MB/s). This takes many variables and overheads into account, i.e. a 100Mb NIC should do about 10MB/s.

Try an nfs transfer as a test ... scp probably has higher cpu overhead (?).
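Something along these lines, assuming the server already exports a directory (here /backup) that the client may mount:

Code:

mount -t nfs fileserver:/backup /mnt
# time a copy of a known-size file and work out the MB/s by hand
time cp bigfile /mnt/
umount /mnt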

Test disk speeds?


umm ...

have you got something like snort or fam clogging up the works somewhere?

....

You could probably keep going through a long list ... but such a low transfer speed suggests to me that it's something obvious, not a problem with fine-tuning sysctls.

ps. gkrellm (and gkrellmd) is very handy for testing this sort of problem.

Matir 02-23-2005 09:49 AM

My switch is N-Way capable. Copying over NFS gives marginally better performance, and it doesn't look like the CPU is hitting the limit. Apparently my "source" machine is part of the problem: copying from my desktop (P4 2.8/1 GB DDR/etc.) yields ~3.3 MB/s using scp. I'll have to look into the I/O on the file server now, I suppose.

uberNUT69 02-23-2005 09:09 PM

Perhaps you could try a test setup like this:

http://img.photobucket.com/albums/v4...th_testing.jpg
(nfs transfers to and from)
or
http://img.photobucket.com/albums/v4...h-test-scp.jpg
(scp transfer from client to server)


notes:
"client":
duron 750, asus a7v333, 256MB PC2100, 2x60GB ata133, 100Mb 3c905, Debian Sid
"server":
p3-650, asus p2b-vm, 512MB PC133, 2x4GB ATA33, 1x40GB ATA100 (on an ATA66 interface), 100Mb 3c905, Debian Sid
(this machine is normally in a separate zone ... off a non-Nway switch on the other side of a P133 firewall)
switch:
10/100 Nway

- I haven't configured snort properly on the server, so stopping the snort service improves the nfs xfer rate.
- famd is occasionally a problem on the client ... fixed by restarting famd

you can see the difference in gkrellm between read/write on the server's slow drive

Either way it's fast enough for me at the moment ... I tend to swap and change my machines and network so much that simplicity is the important factor. Fast enough for DVD data burning and remote desktop apps :)

floog 02-24-2005 11:46 AM

How's your hard drive performance in general? This could be affecting how much data you're pumping.
I had a couple of IBM drives a few years back that took some special configuring to keep the DMA capabilities on and functioning.
Disk reads went from 2.2 MB/sec to 45 MB/sec once I got DMA squared away.

hdparm -tT /dev/hda (or /dev/hdb, etc., depending on how your drive is labeled in /etc/fstab).
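To check whether DMA is actually on, and to switch it on if not (assuming an IDE drive at /dev/hda):

Code:

# query the current DMA setting (using_dma = 1 means on)
hdparm -d /dev/hda
# enable DMA
hdparm -d1 /dev/hda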

Mike

Matir 02-24-2005 03:23 PM

Code:

fserv root # hdparm -tT /dev/hdb

/dev/hdb:
 Timing cached reads:  360 MB in  2.01 seconds = 179.40 MB/sec
 Timing buffered disk reads:  56 MB in  3.04 seconds =  18.40 MB/sec


floog 02-25-2005 04:46 PM

Hmmm ... doesn't appear to be a problem.
18.4 MB/sec is not great performance, but it is nowhere near the zone of poor performance either.
I would say the throughput problem lies outside the hard drive. But hey, at least it's ruled out and there's no second-guessing about it.

Mike
