Dynamic bandwidth throttling for SFTP uploads - looking for best practice
I am uploading incremental backups with duply/duplicity using the sftp module. The initial upload is pretty big and runs for several days (more than 50 GB over a 1 Mbps line), so I am confronted with the problem that other users in the network experience slowdowns while I upload.
What I am trying to do:
I would like to run a script every n minutes which pings a host on the internet (the second hop of the traceroute, for example). If the response time exceeds a threshold (150 ms), the script throttles the upload for one specific host and protocol. Traffic to the local net (mainly Samba) should be unaffected. I cannot use the QoS of the firewall/router. I would also like the penalty to be removed once the ping is quick again (less than 70 ms, for example).
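For what it's worth, that check could be sketched in plain shell before committing to Perl. This is only a sketch: the probe host, thresholds, rates and the class ID are placeholder assumptions that would have to match whatever tc setup is actually in place.

```shell
#!/bin/sh
# Run from cron every n minutes. All values below are illustrative
# assumptions, not tested recommendations.
HOST=${HOST:-203.0.113.1}   # second hop from traceroute (placeholder)
DEV=${DEV:-eth0}
TC=${TC:-tc}                # set TC="echo tc" for a dry run without root

# Map an average RTT in ms to a rate for the backup class; prints nothing
# between the two thresholds so the current rate is left alone.
choose_rate() {
    if [ "$1" -gt 150 ]; then
        echo 100kbit        # line looks congested: throttle hard
    elif [ "$1" -lt 70 ]; then
        echo 900kbit        # line looks idle: nearly the full 1 Mbps
    fi
}

# Average RTT from three pings (empty if the host is unreachable)
RTT=$(ping -c 3 -W 2 -q "$HOST" 2>/dev/null \
      | awk -F/ '/^rtt|^round-trip/ {print int($5)}')

if [ -n "$RTT" ]; then
    RATE=$(choose_rate "$RTT")
    # Adjust the class created by the tc proof of concept below on the fly
    [ -n "$RATE" ] && $TC class change dev "$DEV" parent 11:0 classid 11:1 \
        cbq rate "$RATE" allot 1514 prio 1 avpkt 1000 bounded
fi
```

The point of `tc class change` (as opposed to delete/add) is that the rate can be adjusted while the transfer keeps running.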
What I have tried so far to solve this issue:
I looked at trickle and some other out-of-the-box shaping tools, but they do not give me the possibility to change the rate while the upload is running.
I would now write a script in Perl using IPTables::ChainMgr (http://search.cpan.org/~mrash/IPTabl...es/ChainMgr.pm), a wrapper around iptables, combined with the Net::Ping module (http://search.cpan.org/~smpeters/Net...ib/Net/Ping.pm).
Before I start coding, I also tried to get a proof of concept working (I haven't verified yet that this does what I want):
sudo tc qdisc add dev eth0 root handle 11: cbq bandwidth 100Mbit avpkt 1000 mpu 64
sudo tc class add dev eth0 parent 11:0 classid 11:1 cbq rate 100kbit allot 1514 prio 1 avpkt 1000 bounded
sudo tc filter add dev eth0 parent 11:0 protocol ip prio 16 u32 match ip dst MyserverIP flowid 11:1
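Assuming that setup loads cleanly, one way to verify the proof of concept might be to watch the per-class counters while the upload runs, and to re-cap the class on the fly (class and device names as in the commands above):

```shell
# Show per-class statistics; the "Sent" bytes/packets of class 11:1
# should grow while the upload to MyserverIP is running
tc -s class show dev eth0

# Change the cap without tearing down the qdisc or filter
sudo tc class change dev eth0 parent 11:0 classid 11:1 \
    cbq rate 300kbit allot 1514 prio 1 avpkt 1000 bounded
```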
I looked for existing approaches and couldn't find anything, which makes me suspect I am missing something. Any feedback would be greatly appreciated.
Thank you and Regards
While it's unclear whether the upload happens in your LAN or over the Internet, and whether there are other constraints involved that you haven't mentioned, I'd like to argue for Something Completely Different (no larches involved), if I may:
- Since most of the traffic will be TCP-based, ICMP will not give you the information about network route conditions that you need, and I'd say even a UDP-based traceroute would not, the way tcptraceroute could.
- Since you can't use QoS (and you haven't indicated whether the remote receiver honors QoS at all), bandwidth shaping can only be applied to egress traffic: you can't govern what the remote sender does.
- Bandwidth shaping will only lengthen the time needed for the upload.
- Abrupt changes in bandwidth shaping (based on ICMP-derived changes in the "Internet weather", which may change as abruptly as routing paths do) may interact badly with the TCP congestion-avoidance algorithm the kernel uses.
So performance-wise you're looking at penalties rather than gains. Protocol-wise, there is a quicker way to send the 50 GB initial upload: plain FTP has less overhead than SFTP. Looking at network utilization a different way, you could chunk the uploads and benefit from maximum bandwidth capacity at night; that is, if your network users work 9 to 5. And you shouldn't scoff at the ol' sneakernet method using tapes, HDDs or BDs, if anyone can access the remote server directly or from within the same subnet.
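For the night-time chunking idea, a pair of cron entries is usually enough. The times, profile name and the use of pkill here are illustrative assumptions; duplicity can generally resume an interrupted backup on its next run:

```shell
# /etc/cron.d/offsite-backup (sketch; "myprofile" is a placeholder)
# 22:00 - start or resume the duply backup run
0 22 * * * root duply myprofile backup
# 06:30 - stop it before the 9-to-5 users arrive; the next run resumes
30 6 * * * root pkill -f duplicity
```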
Thank you for your feedback unSpawn,
I see the implications and problems. It is just that if I don't do anything, my upload will clog requests, and the downloads (the other users in the network use the bandwidth mostly for surfing/email) will experience timeouts. Sadly there is no way around this: we need to back up our files off-site, and the only server is in another country, so that would be a long drive every evening.
Actually it would be fine if my upload were slower than it is today. I am not trying to speed up the upload; I am trying to find a reasonable rate for me to upload at. A hard-coded bandwidth limiter won't do the trick, as I would again hit the ceiling if someone starts a YouTube video or something. I would like my upload to take, let's say, 50% of the available bandwidth.