Linux - Networking
This forum is for any issue related to networks or networking.
I am doing backups and restores to and from a backup server machine in the garage (in case of fire or theft) using rsync, and the LAN is running much slower than I think it should.
The best consistent speed I get is 40 Mb/s. The lights on the interface cards indicate the link is in Gigabit mode, and the interfaces are fairly new. The router is a Netgear WNDR3700, a top-of-the-line consumer-grade router.
Hardwire only, using Ubiquiti ToughCable and ToughConnectors, less than a 50' run.
What can I look for?
Last edited by Quantumstate; 05-15-2011 at 08:37 AM.
That sounds about right. Unless you are running SSDs or a performance oriented RAID configuration on both sides of the connection, the bottleneck will likely be the hard drives. Rsync is doing a lot of seeking and occasional transfer (after the initial sync), which impacts the transfer rate considerably.
You can do some tuning. You don't mention jumbo frames: raising the MTU on the interfaces above 1500, if your hardware supports it, can significantly improve long transfers.
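A rough sketch of what that looks like (the interface name eth0 is an assumption; substitute your own, and note the MTU must be raised on both hosts and any switch in the path):

```shell
# Check the current MTU ("eth0" is a placeholder interface name):
#   ip link show eth0
# Raise it to 9000 on BOTH hosts, as root:
#   ip link set dev eth0 mtu 9000
# Payload share per packet (IP + TCP headers ~ 40 bytes, no options):
std_eff=$(( (1500 - 40) * 100 / 1500 ))
jumbo_eff=$(( (9000 - 40) * 100 / 9000 ))
echo "MTU 1500 payload share: ${std_eff}%"
echo "MTU 9000 payload share: ${jumbo_eff}%"
```

The per-packet header saving looks small; the larger win from jumbo frames is fewer packets, and therefore fewer interrupts, per transferred byte.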
iperf is a good tool for testing LAN performance. It is cross platform and tests memory to memory data transfer via TCP or UDP. This takes storage speed out of the equation, allowing you to verify your LAN performance.
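A minimal iperf session might look like this (assuming the classic iperf 2 command line; the unit conversion matters because iperf reports Mbit/s while rsync and iotop report MB/s):

```shell
# On the backup server, start a listener (iperf 2 syntax):
#   iperf -s
# On the client, run a 10-second TCP test against it:
#   iperf -c <server-hostname> -t 10
# Convert iperf's Mbit/s into the MB/s that rsync reports:
mbits=937               # example: a healthy Gigabit result
mbytes=$(( mbits / 8 )) # ignoring framing overhead
echo "${mbits} Mbit/s ~ ${mbytes} MB/s"
```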
iperf is not available in Debian, and unfortunately Sourceforge won't let me look at anything without being signed in. I tried the three usernames and password I've used for the past ten years and it refuses them all. I tried to register anew, and it said, "Oh snap! We can't process this request." So I give up on that.
I do notice though that the CPU on the backup server is running rsync at 100% on one core, and very little on the other. It appears that rsync is not multithreaded and this may be my problem. Any idea why, or what can be done about it?
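rsync is indeed single-threaded per invocation, so one saturated core caps the whole transfer. One common workaround, sketched here with hypothetical paths, is to launch one rsync per top-level directory; the shell fan-out pattern itself is demonstrated below with cheap stand-in jobs:

```shell
# Hypothetical: one rsync per top-level directory, run in parallel.
#   for d in /data/*/; do
#       rsync -a "$d" backupserver:/backup/"$(basename "$d")"/ &
#   done
#   wait
# The same fan-out/wait pattern, with stand-in jobs:
out=$(
  for i in 1 2 3; do
    ( echo "worker $i finished" ) &
  done
  wait
)
n=$(printf '%s\n' "$out" | grep -c 'finished')
echo "parallel jobs completed: $n"
```

Parallel rsyncs only help if the disks can keep up; with one core already pinned, two or three streams are usually the sweet spot.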
Don't settle for 40 MiB/s. A straight Gigabit run like that should be able to hover around 80 MiB/s. I have a FreeNAS box, a good GbE switch, and new motherboards at both ends, and it flies!
Unrelated: I have to disable/enable the NIC on my client at least once a day to get it "awake" and running healthy again.
Edit: running the hard drives in AHCI mode helps drastically.
I would take a hard look at the hard drive activity during big transfers, often the controller/hard drive is the primary bottleneck in this situation.
After you fix rsync so that it can saturate the 1 Gb/s link, move on to the other things I mentioned (and the FTP test mentioned earlier, because FTP has almost none of rsync's overhead).
CodeKrash, it appears that the CPU on my backup server is maxed on one core by rsync, if top isn't lying. Since rsync is not multi-threaded, that's the limiting factor. On the backup server, iotop shows a read rate of 3 to 7 MB/s, and on the destination an average write rate of 4.5 MB/s. Both sides are WD Green 2TB drives capable of much more than this. I only see periodic flashes of the drive lights on both machines. I'm pretty sure I'm CPU-bound on the source, on one core.
Baldy I tried that before and it couldn't find iperf, even with apt-cache search, but I tried it just now and it installed. Ghosts, I guess.
Ouch!
You may have to add a repository that carries iperf to apt; I know this often happens with yum when trying to find some software.
I hope you don't settle for that magnitude of inefficiency.
I was told on the rsync mailing list that it's spending its time computing file deltas and compressing. The recommendation was to drop --compress and use --whole-file, and in fact that increased my speed from 4.5 Mb/s to 18.5!
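For the record, the change amounts to removing `-z`/`--compress` (skip compression) and adding `--whole-file` (skip the delta-transfer algorithm); the paths and host below are hypothetical, and the arithmetic just restates the reported gain:

```shell
# CPU-bound form (delta transfer + compression), slow on a fast LAN:
#   rsync -az /data/ backupserver:/backup/data/
# Whole-file, uncompressed form:
#   rsync -a --whole-file /data/ backupserver:/backup/data/
# Speedup implied by the figures reported above (4.5 -> 18.5):
old=45; new=185                 # tenths, to stay in integer arithmetic
gain=$(( new * 10 / old ))      # speedup, times ten
echo "speedup ~ $(( gain / 10 )).$(( gain % 10 ))x"
```

On a fast LAN the CPU cost of compression and delta computation usually exceeds the bandwidth they save; over a slow WAN the trade-off reverses.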
Much better. Still core-bound though. (not disk, or network)
OK now that I've finished my restore, I've run iperf.
Droog --> Hex 937 Mbits/sec!
Merlin --> Hex 40.6 Mbits/sec (wifi)
So I'm getting amazing performance from the Gigabit connection; I should, because I used expensive Ubiquiti ToughCable and ToughConnectors for the 70' run to the garage where my backup server is. BTW, Ubiquiti has some amazing products, especially the NanoBridge M2.
My wifi performance is not too good, considering that I run 5GHz only, 802.11n. It's a Netgear WNDR3700 router and the client card is the Intel 4595 built into my HP 8710w laptop.
Last edited by Quantumstate; 05-19-2011 at 09:51 AM.