even more tc "fun" - trying to guarantee openvpn 8mbit on upload interface
so i'm now trying to set up HTB on the upload interface of my linux router PC. I have ~11mbit upload from my ISP. What i'm trying to do is give all traffic a default guaranteed minimum of 1.5mbit, while openvpn traffic gets a guaranteed 9.4mbit:
Code:
tc qdisc add dev external root handle 1: htb default 11
tc class add dev external parent 1: classid 1:1 htb rate 11.4mbit ceil 11.4mbit
tc class add dev external parent 1:1 classid 1:10 htb rate 9.4mbit ceil 11.4mbit
tc class add dev external parent 1:1 classid 1:11 htb rate 1.5mbit ceil 11.4mbit
but of course, this isn't working as I expect. For example, if I start an upload from the LAN to an openvpn client, and then start another upload from the LAN to somewhere else (not an openvpn client), I would think the non-vpn upload would slow down to 1.5mbit while the openvpn transfer got its 9.4mbit.
but this doesn't happen; both transfers balance out, as if there were no traffic control going on at all.
My guess is that all your traffic is ending up in one queue, so the problem is most likely your classification.
If I remember rightly, 'tc -s qdisc' should give you stats for the queues. If your classification were working, you'd see traffic on both queues.
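For the per-class rate/ceil accounting specifically, 'tc -s class' may be more telling than 'tc -s qdisc', since it's the HTB classes (not the leaf qdiscs) that carry the counters. A sketch of both checks:

```shell
# per-class byte/packet counters: if classification is working, bytes should
# accumulate separately under 1:10 (openvpn) and 1:11 (the default class)
tc -s class show dev external

# per-qdisc stats for any leaf qdiscs attached under the classes
tc -s qdisc show dev external
```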
the device in question (external) shows a "requeues 3", if that's what you mean... that sounds right, queues 1:10, 1:11, and 1:12
idk. afaict, my iptables classification logic seems right, and i'm doing it identically to my other classifications... data being output on external (-o external), with my source IP (-s xx.xx.xx.xx), using udp with source port 1194 (-p udp --sport 1194), gets the classification 1:10 (-j CLASSIFY --set-class 1:10)... i tried removing the "-s" option but no dice.
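For reference, the rule described above presumably reads something like this (a reconstruction from the options named; the chain is assumed to be mangle POSTROUTING, since that's where the CLASSIFY target is valid, and the -s match is left off as in the counter output further down):

```shell
# classify locally-generated openvpn packets (udp sport 1194) leaving the
# external interface into htb class 1:10
iptables -t mangle -A POSTROUTING -o external -p udp --sport 1194 \
    -j CLASSIFY --set-class 1:10
```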
'iptables -t mangle -nvL' definitely shows traffic counters for the class:
Code:
Chain PREROUTING (policy ACCEPT 6004K packets, 5041M bytes)
pkts bytes target prot opt in out source destination
Chain INPUT (policy ACCEPT 204K packets, 36M bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 5800K packets, 5005M bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 456K packets, 647M bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 6256K packets, 5653M bytes)
pkts bytes target prot opt in out source destination
10 688 CLASSIFY all -- * internal 0.0.0.0/0 172.16.16.14 CLASSIFY set 1:11
2009K 1880M CLASSIFY all -- * internal 0.0.0.0/0 172.16.16.15 CLASSIFY set 1:11
437K 645M CLASSIFY udp -- * external 0.0.0.0/0 0.0.0.0/0 udp spt:1194 CLASSIFY set 1:10
0 0 CLASSIFY all -- * external 172.16.16.11 0.0.0.0/0 CLASSIFY set 1:12
2096K 1146M CLASSIFY all -- * external 172.16.16.15 0.0.0.0/0 CLASSIFY set 1:12
129K 9826K CLASSIFY all -- * external 192.168.192.0/24 0.0.0.0/0 CLASSIFY set 1:13
173K 275M CLASSIFY all -- * wifi 0.0.0.0/0 192.168.192.0/24 CLASSIFY set 1:2
maybe specifying "-i tun0 -o external" in iptables is it? as in, matching not by port but by interface - incoming interface = the vpn interface, outgoing = external?... i'll try it when I get home tonight
Last edited by psycroptic; 09-21-2013 at 06:49 PM.
apparently you can't use -i with the POSTROUTING chain.... beats me why.
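iptables rejects -i in OUTPUT and POSTROUTING because locally-generated packets (which is what the router's own encapsulated openvpn traffic is) have no input interface by the time they reach those chains. One hedged alternative sketch, which keeps the sport-1194 match but moves it to mangle OUTPUT and uses a firewall mark plus an fw filter instead of CLASSIFY:

```shell
# the encapsulated openvpn packets originate on the router itself, so they
# traverse mangle OUTPUT (not FORWARD); mark them there...
iptables -t mangle -A OUTPUT -o external -p udp --sport 1194 -j MARK --set-mark 10

# ...and map that mark to htb class 1:10 with an fw filter on the egress qdisc
tc filter add dev external parent 1: protocol ip prio 1 handle 10 fw flowid 1:10
```

One advantage over CLASSIFY: 'tc -s filter show dev external' then gives a second place to confirm the packets are actually being steered.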
in any case, i'll go ahead and post my internal and external tc scripts. i've added some burst values and an sfq, thinking maybe that would fix it. of course it didn't; nothing's changed.
i haven't got a clue why the same kind of subdividing of traffic works on the download (internal) but not the upload (external)... doesn't make sense.
Code:
tc qdisc add dev internal root handle 1: htb default 10
tc class add dev internal parent 1: classid 1:1 htb rate 53.96mbit ceil 53.96mbit burst 6k
tc class add dev internal parent 1:1 classid 1:10 htb rate 37.06mbit ceil 53.96mbit burst 6k
tc class add dev internal parent 1:1 classid 1:11 htb rate 12.5mbit ceil 53.96mbit burst 6k
tc class add dev internal parent 1:1 classid 1:12 htb rate 4.4mbit ceil 53.96mbit burst 6k
tc qdisc add dev internal parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev internal parent 1:11 handle 11: sfq perturb 10
tc qdisc add dev internal parent 1:12 handle 12: sfq perturb 10
tc qdisc add dev external root handle 1: htb default 11
tc class add dev external parent 1: classid 1:1 htb rate 11.7mbit ceil 11.7mbit burst 4k
tc class add dev external parent 1:1 classid 1:10 htb rate 9.7mbit ceil 11.7mbit burst 4k
tc class add dev external parent 1:1 classid 1:11 htb rate 1.5mbit ceil 11.7mbit burst 4k
tc class add dev external parent 1:1 classid 1:12 htb rate 500kbit ceil 11.7mbit burst 4k
tc qdisc add dev external parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev external parent 1:11 handle 11: sfq perturb 10
tc qdisc add dev external parent 1:12 handle 12: sfq perturb 10
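one thing worth checking against the external script above: the POSTROUTING output earlier shows a rule setting class 1:13 on external, but the script only ever creates 1:10 through 1:12, so packets classified to the nonexistent 1:13 should silently fall back to the htb default (1:11 here). A quick consistency check:

```shell
# list the classes htb actually knows about on external; any classid used by
# a CLASSIFY rule but absent here falls back to "default 11"
tc class show dev external
```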
Here's what I used when I was on a slow link. You'd need to adapt it to detect particular IPs; look at the matches.
Code:
#!/bin/sh
DEV=auto DOWNLINK=3360 UPLINK=864 # set these to your purchased bandwidth, kbit/s
UPFRAC=95 DOWNFRAC=95
while getopts xu:d:D:U:N opt ; do case $opt in
x) set -x ;; \?) sed /^$/q $0 ; exit 1 ;; N) really=echo ;;
D) DOWNLINK=$((OPTARG)) ;; U) UPLINK=$((OPTARG)) ;;
d) DOWNFRAC=$((OPTARG)) ;; u) UPFRAC=$((OPTARG)) ;;
esac; done; shift $((OPTIND-1)); test -- = "$1" && shift 1
test $DEV = auto && DEV=`ip route|sed -nr 's/^default.* dev ([^ ]*).*/\1/p'`
UPLINK_HIPRIO=$((UPLINK*UPFRAC/100))
UPLINK_LOWPRI=$((UPLINK_HIPRIO*UPFRAC/100))
DOWNLINK_THROTTLE=$((DOWNLINK*DOWNFRAC/100))
# reset $DEV queueing first
$really /sbin/tc qdisc del dev $DEV root 2> /dev/null > /dev/null
$really /sbin/tc qdisc del dev $DEV ingress 2> /dev/null > /dev/null
TQ="$really /sbin/tc qdisc add dev $DEV"
TC="$really /sbin/tc class add dev $DEV"
$TQ root handle 1: htb default 20
$TC parent 1: classid 1:1 htb rate ${UPLINK}kbit burst 48kbit
$TC parent 1:1 classid 1:10 htb rate ${UPLINK_HIPRIO}kbit burst 48kbit prio 1
$TC parent 1:1 classid 1:20 htb rate ${UPLINK_LOWPRI}kbit burst 48kbit prio 2
$TQ handle 10: parent 1:10 sfq perturb 10 # roundrobin srcdst-hash chains, rehash @10sec to ameliorate collision effects
$TQ handle 20: parent 1:20 sfq perturb 10 # .
# management packets get the full uplink rate. interactive packets get high-priority. all else gets policed.
TF="$really /sbin/tc filter add dev $DEV parent 1: protocol ip prio 10"
$TF handle 1: u32 divisor 1 # tcp management
$TF handle 2: u32 divisor 1 # tcp bare acks
$TF handle 3: u32 divisor 1 # udp classification
$TF handle 4: u32 divisor 1 # policed outbounds:
$TF u32 ht 4:: match u8 0 0 at 0 classid 1:20 #police rate $((UPLINK_LOWPRI))kbit burst $((UPLINK/4))kbit action drop
$TF u32 match ip protocol 1 0xff classid 1:1 # icmp gets full rate
$TF u32 match ip tos 0x10 0xff classid 1:10 # TOS lowdelay gets high-priority uplink
$TF u32 match ip protocol 6 0xff match u8 0x05 0x0f at 0 link 1: offset at 0 mask 0x0f00 shift 6
$TF u32 match ip protocol 17 0xff match u8 0x05 0x0f at 0 link 3: offset at 0 mask 0x0f00 shift 6
TCP="$TF u32 ht 1::"
TC2="$TF u32 ht 2::"
UDP="$TF u32 ht 3::"
$TCP match u8 0x01 0x01 at nexthdr+13 classid 1:1 # tcp management gets full rate
$TCP match u8 0x02 0x02 at nexthdr+13 classid 1:1
$TCP match u8 0x04 0x04 at nexthdr+13 classid 1:1
$TCP match u16 0x0000 0xffc0 at 2 link 2:
$TCP match u8 0 0 at 0 link 4:
# look for anything up to three-option ACKs to cover SACKs
$TC2 match u16 0x0028 0xffff at 2 match u16 0x5010 0xf010 at nexthdr+12 classid 1:1 # bare TCP acks get full rate
$TC2 match u16 0x6010 0xf010 at nexthdr+12 match u16 0x002c 0xffff at 2 classid 1:1 # .
$TC2 match u16 0x7010 0xf010 at nexthdr+12 match u16 0x0030 0xffff at 2 classid 1:1 # .
$TC2 match u16 0x8010 0xf010 at nexthdr+12 match u16 0x0034 0xffff at 2 classid 1:1 # .
$TC2 match u8 0 0 at 0 link 4: # all else gets throttled
$UDP match u16 60311 0xffff at nexthdr+0 link 4:
$UDP match u16 60311 0xffff at nexthdr+2 link 4:
$UDP match u8 0 0 at 0 classid 1:10 #non-bt UDP
#
# ingress filtering: all but bulk are unmetered
$really /sbin/tc qdisc add dev $DEV handle ffff: ingress
TF="$really /sbin/tc filter add dev $DEV parent ffff: protocol ip prio 10"
$TF handle 1: u32 divisor 1 # tcp management
$TF handle 2: u32 divisor 1 # tcp bare acks
$TF handle 3: u32 divisor 1 # udp classification
$TF handle 4: u32 divisor 1 # policed inbounds
$TF u32 ht 4:: match u8 0 0 at 0 police rate $((DOWNLINK_THROTTLE))kbit burst $((DOWNLINK/8))kbit drop classid :0
$TF u32 match ip tos 0x10 0xff classid :0
$TF u32 match ip protocol 1 0xff classid :0
$TF u32 match ip protocol 6 0xff match u8 0x05 0x0f at 0 link 1: offset at 0 mask 0x0f00 shift 6
$TF u32 match ip protocol 17 0xff match u8 0x05 0x0f at 0 link 3: offset at 0 mask 0x0f00 shift 6
TCP="$TF u32 ht 1::"
TC2="$TF u32 ht 2::"
UDP="$TF u32 ht 3::"
$TCP match u8 1 0x01 at nexthdr+13 classid :0
$TCP match u8 2 0x02 at nexthdr+13 classid :0
$TCP match u8 4 0x04 at nexthdr+13 classid :0
$TCP match u16 0 0xffc0 at 2 link 2:
$TCP match u8 0 0 at 0 link 4:
# look for anything up to three-option ACKs to cover SACKs
$TC2 match u16 0x0028 0xffff at 2 match u16 0x5010 0xf010 at nexthdr+12 classid :0 # bare TCP acks get full rate
$TC2 match u16 0x6010 0xf010 at nexthdr+12 match u16 0x002c 0xffff at 2 classid :0 # .
$TC2 match u16 0x7010 0xf010 at nexthdr+12 match u16 0x0030 0xffff at 2 classid :0 # .
$TC2 match u16 0x8010 0xf010 at nexthdr+12 match u16 0x0034 0xffff at 2 classid :0 # .
$UDP match u16 60311 0xffff at nexthdr+0 link 4:
$UDP match u16 60311 0xffff at nexthdr+2 link 4:
$UDP match u8 0 0 at 0 classid :0 #non-bt UDP
yeah you lost me on most of that man... i haven't done much work with tc, and have had very little luck finding good docs. you say i would need to "adapt" it to detect destination IPs.... why? i wouldn't think it should matter what the destination of the UDP openvpn packets is; i just want to place all of that traffic in 1:10, giving it a minimum HTB rate on the external interface.... seems simple enough with iptables --sport classification, but apparently not.
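for what it's worth, "all UDP sport-1194 traffic into 1:10 on external" can also be expressed directly as a tc u32 filter, skipping iptables entirely. A hedged sketch against the classids from the script above:

```shell
# match UDP (ip protocol 17) with source port 1194 and steer it into class
# 1:10; everything else falls through to htb's "default 11" class. note the
# sport match reads a fixed offset, so it assumes IP headers without options
tc filter add dev external parent 1: protocol ip prio 1 u32 \
    match ip protocol 17 0xff \
    match ip sport 1194 0xffff \
    flowid 1:10
```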