LinuxQuestions.org

LinuxQuestions.org (/questions/)
-   Linux - Networking (https://www.linuxquestions.org/questions/linux-networking-3/)
-   -   Traffic shaping with HTB, several questions (https://www.linuxquestions.org/questions/linux-networking-3/traffic-shaping-with-htb-several-questions-624009/)

exscape 02-26-2008 01:08 PM

Traffic shaping with HTB, several questions
 
Hey everybody. :)
I've been working on my traffic shaping solution a bit today. I'm using IMQ for ingress shaping, and HTB/SFQ to do the actual shaping. Here's the important part of my script:

Code:


# Variables
DOWNRATE=12500kbit
DOWNRATE_CEIL=13000kbit
DOWNRATE_LOWPRIO_CEIL=12500kbit # for P2P and such

UPRATE=4500kbit
UPRATE_CEIL=4750kbit
UPRATE_LOWPRIO_CEIL=4500kbit # for P2P and such

# Clear stuff
tc qdisc del dev imq0 root 2>/dev/null
tc qdisc del dev imq1 root 2>/dev/null

# Set queue length
ifconfig net txqueuelen 16
ifconfig imq0 txqueuelen 16
ifconfig imq1 txqueuelen 16

# Create root qdisc
tc qdisc add dev imq0 root handle 1:0 htb default 20
tc class add dev imq0 parent 1:0 classid 1:1 htb rate 50000kbit burst 10k
#
# What should the "rate" be here (above), by the way...?
#

tc qdisc add dev imq1 root handle 1:0 htb default 20
tc class add dev imq1 parent 1:0 classid 1:1 htb rate 50000kbit burst 10k
# ... and here?


# Create classes, incoming
tc class add dev imq0 parent 1:1 classid 1:10 htb rate $DOWNRATE ceil $DOWNRATE_CEIL burst 10k prio 0
tc class add dev imq0 parent 1:1 classid 1:20 htb rate $DOWNRATE ceil $DOWNRATE_CEIL burst 10k prio 1
tc class add dev imq0 parent 1:1 classid 1:30 htb rate 1kbit ceil $DOWNRATE_LOWPRIO_CEIL burst 10k prio 2

# Create classes, outgoing
tc class add dev imq1 parent 1:1 classid 1:10 htb rate $UPRATE ceil $UPRATE_CEIL burst 10k prio 0
tc class add dev imq1 parent 1:1 classid 1:20 htb rate $UPRATE ceil $UPRATE_CEIL burst 10k prio 1
tc class add dev imq1 parent 1:1 classid 1:30 htb rate 1kbit ceil $UPRATE_LOWPRIO_CEIL burst 10k prio 2

# add SFQ
tc qdisc add dev imq0 parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev imq0 parent 1:20 handle 20: sfq perturb 10
tc qdisc add dev imq0 parent 1:30 handle 30: sfq perturb 10
tc qdisc add dev imq1 parent 1:10 handle 10: sfq perturb 10
tc qdisc add dev imq1 parent 1:20 handle 20: sfq perturb 10
tc qdisc add dev imq1 parent 1:30 handle 30: sfq perturb 10

# Redirect traffic into their correct classes
tc filter add dev imq0 protocol ip parent 1:0 prio 1 handle 1 fw flowid 1:10
tc filter add dev imq0 protocol ip parent 1:0 prio 2 handle 2 fw flowid 1:20
tc filter add dev imq0 protocol ip parent 1:0 prio 3 handle 3 fw flowid 1:30
tc filter add dev imq1 protocol ip parent 1:0 prio 1 handle 1 fw flowid 1:10
tc filter add dev imq1 protocol ip parent 1:0 prio 2 handle 2 fw flowid 1:20
tc filter add dev imq1 protocol ip parent 1:0 prio 3 handle 3 fw flowid 1:30

# ... lots of marking here ...
iptables -t mangle -A POSTROUTING -p tcp -m tcp --sport 49000 -j MARK --set-mark 3
iptables -t mangle -A POSTROUTING -p tcp -m tcp --dport 49000 -j MARK --set-mark 3

# Enable NAT
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o net -j MASQUERADE

# Move traffic into IMQ interfaces
iptables -t mangle -A PREROUTING -i net -j IMQ --todev 0
iptables -t mangle -A POSTROUTING -o net -j IMQ --todev 1
ip link set imq0 up
ip link set imq1 up

(note the embedded questions above)

Now... when I DON'T use the traffic shaping script, my max download speed is 2800-3000kB/s, or just about 24Mbps (I have a 24/8 cable connection).
However, to keep my latency stable (at 9-20 ms), I have to shape the DOWNRATE to 12500kbps or less! In other words, I can either download at ~24000kbps (with 200+ ms pings), OR shape it to less than 13000kbps and get a stable ping. Any higher and the ping goes out of whack. Why, and what can I do about it? I expect to get at LEAST 20000kbps out of this; is that too much to ask or not?

I'm not good at this stuff by any means; in fact, I don't completely understand the script above, even though I've written most of it. Any help is appreciated. :)

Edit: The prioritizing doesn't seem to work. I tried setting P2P to *high* priority and ICMP to *low*, then maxed out my upload with bittorrent/FTP, but I got half-decent (~100 ms) pings, the same as with the priorities the normal way around when my cap is too high. Shouldn't the ICMP echo packets be placed last in the queue and thus have enormous latencies...?

Also, something weird is up: after a while (a bunch of script restarts), it seems to start ignoring my cap. With a cap of 2000kbit it keeps uploading at 4000kbit, with a cap of 3700 it uploads at 7600, and so on... What the heck?

SonJelfn 05-28-2008 12:10 PM

Hello,

let's see if I can help you out with your questions.

The rates on imq0 and imq1 should be the speeds provided by your ISP minus an estimated 20% for overhead. You do this even though your physical connection to the modem (I'm guessing Ethernet (100Mbit) or USB 2.0 (480Mbit)) can probably push far more; the point is to make sure you never overqueue the modem.

That would mean:

Code:

tc qdisc add dev imq0 root handle 1:0 htb default 30
tc class add dev imq0 parent 1:0 classid 1:1 htb rate 19200kbit burst 10k

and

Code:

tc qdisc add dev imq1 root handle 1:0 htb default 30
tc class add dev imq1 parent 1:0 classid 1:1 htb rate 6400kbit burst 10k

to begin with.
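
The arithmetic behind those two numbers can be sketched in the shell; this assumes the 24/8 Mbit line mentioned in the first post:

```shell
#!/bin/sh
# Assumed line speeds from the thread, in kbit: 24 Mbit down, 8 Mbit up
DOWN_LINE=24000
UP_LINE=8000

# Keep 80% of the line rate, i.e. subtract the estimated 20% overhead
DOWN_RATE=$(( DOWN_LINE * 80 / 100 ))
UP_RATE=$(( UP_LINE * 80 / 100 ))

echo "${DOWN_RATE}kbit"  # 19200kbit, the rate used for imq0 above
echo "${UP_RATE}kbit"    # 6400kbit, the rate used for imq1 above
```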

Now, with the classes, you have to really think about what you want to prioritize and how. Remember that in HTB the "rate" is the bandwidth guaranteed to the traffic in that class even when the link is full. This means that the sum of the rates of all child classes should be equal to or less than the rate of their parent class.

If you miscalculate that, you will start obtaining inconsistent results, especially if you push to saturate your link.
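
A quick way to sanity-check that before loading the script is to sum the child rates in the shell. The three rates below are illustrative, not from the thread; the parent rate is the 19200kbit imq0 figure from above:

```shell
#!/bin/sh
# Parent class rate from the suggested imq0 setup, in kbit
PARENT_RATE=19200

# Hypothetical child-class rates: high, normal, low priority
RATE_10=4800
RATE_20=9600
RATE_30=4800

SUM=$(( RATE_10 + RATE_20 + RATE_30 ))
echo "children: ${SUM}kbit, parent: ${PARENT_RATE}kbit"

# If the children oversubscribe the parent, HTB's guarantees break down
if [ "$SUM" -gt "$PARENT_RATE" ]; then
    echo "WARNING: child rates exceed the parent class rate" >&2
fi
```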

The problems with your stable ping are common and unfortunately not entirely fixable. The thing is that P2P in general creates tons of connections, which makes whatever class you put it in ask for a lot more transmission time than the other classes.

The best example I can come up with is running bittorrent while playing an online game. A typical bittorrent client will create at least 10 connections, while a game generally needs only one. There is also the issue that the 10 bittorrent connections are TCP-based, so they have state, whereas a game is usually UDP. This means the 10 bittorrent connections will constantly ask to transmit (and will do so heavily), while the game, even at the highest priority, has to take the transmission token away from the bittorrent class (which takes time), transmit, and then the token will probably pass back to the bittorrent class until something with a higher priority comes along to take it away again.

If you put the bittorrent traffic in the same class as the game, the problem just reappears inside the SFQ or PFIFO, where the excess P2P traffic again asks for more transmission time than the game.
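
Classification-wise, that separation could be sketched with iptables marks. The ports here are pure assumptions (27015 for a game, 6881:6889 for bittorrent); adjust them to your own setup:

```shell
# Hypothetical game UDP traffic into mark 1 (filtered into class 1:10)
iptables -t mangle -A POSTROUTING -p udp --dport 27015 -j MARK --set-mark 1

# Hypothetical bittorrent TCP traffic into mark 3 (filtered into class 1:30)
iptables -t mangle -A POSTROUTING -p tcp --dport 6881:6889 -j MARK --set-mark 3
```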

That said, the only way to fix the stable-ping problem is to do what you have already done. You could also change the queuing discipline of a particular class to PFIFO instead of SFQ, and put the bittorrent traffic into the lowest-priority class with a low rate while keeping the ceil at the connection speed. You should also send all non-prioritized traffic into that same class, i.e. make it the HTB default (class 30 in my example).
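
As a rough sketch of that suggestion (the 6400kbit figure comes from the imq1 example earlier in this reply; the 64kbit rate, prio 7, and pfifo limit are illustrative assumptions):

```shell
# Root qdisc: unclassified traffic falls through to class 1:30
tc qdisc add dev imq1 root handle 1:0 htb default 30
tc class add dev imq1 parent 1:0 classid 1:1 htb rate 6400kbit burst 10k

# Lowest priority, tiny guaranteed rate, but allowed to borrow up to the
# full connection speed whenever nothing else wants the bandwidth
tc class add dev imq1 parent 1:1 classid 1:30 htb rate 64kbit ceil 6400kbit prio 7

# PFIFO instead of SFQ on the low-priority class, with a short queue
tc qdisc add dev imq1 parent 1:30 handle 30: pfifo limit 16
```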

HTB itself has been thoroughly tested and works extremely well; the problem you seem to be having is with the classification of the packets. I also noticed that your script sends unclassified traffic to class 20 ("default 20"), while I think it should go to 30.

It also depends on how you prioritize the traffic: if ICMP is in the lowest class and bittorrent in a higher one, then the ICMP echo packets should indeed see a biggish delay, like you said.

I hope this helps.

Good luck.
