02-13-2021, 12:49 AM   #1
Turbocapitalist
LQ Guru
 
Registered: Apr 2005
Distribution: Linux Mint, Devuan, OpenBSD
Posts: 7,333
Blog Entries: 3

Rep: Reputation: 3729
traffic shaping with tc, on a server


I am looking for a simple example recipe that applies different priorities to outgoing traffic on several different ports, on a system with a single network interface. What is the right way to do that for Linux kernel 5.4 and later?

I presume that the answer is tc, but I have done a lot of searching and found mostly outdated material. I have also pored over the manual pages for tc and its components, which are much more convoluted than those for PF, with which I am already quite familiar. Because the tc tool set is still so unfamiliar, I would really prefer a concrete example for a system with a single interface.
 
02-13-2021, 07:16 PM   #2
astrogeek
Moderator
 
Registered: Oct 2008
Distribution: Slackware [64]-X.{0|1|2|37|-current} ::12<=X<=15, FreeBSD_12{.0|.1}
Posts: 6,269
Blog Entries: 24

Rep: Reputation: 4206
As no one more knowledgeable has stepped up I'll offer what I can. Disclaimer: I use tc and have been able to get it to do what I want in most cases, but make no claim to being an expert or even proficient. What follows is from notes made during my own learning experience with tc.

I found online tutorials for tc to be woefully lacking and some even incorrect. But I would recommend the following on the basis that I found them valuable to my own understanding:

ArchWiki: Advanced Traffic Control

Hierarchical token bucket theory

Linux.com QoS and Traffic Control with Linux tc

And of course man tc (much more useful than it may seem on first look!).

By way of a working example with commentary, here is my own offering using eth0 as the target interface. It will follow this diagram borrowed from the above link, QoS and Traffic Control, which shows the relationships among the parts we will create:

Code:
  .-------------------------------------------------------.
  |                                                       |
  |  HTB                                                  |
  |                                                       |
  | .----------------------------------------------------.|
  | |                                                    ||
  | |  Class 1:1                                         ||
  | |                                                    ||
  | | .---------------..---------------..---------------.||
  | | |               ||               ||               |||
  | | |  Class 1:10   ||  Class 1:20   ||  Class 1:30   |||
  | | |               ||               ||               |||
  | | | .------------.|| .------------.|| .------------.|||
  | | | |            ||| |            ||| |            ||||
  | | | |  fq_codel  ||| |  fq_codel  ||| |  fq_codel  ||||
  | | | |            ||| |            ||| |            ||||
  | | | '------------'|| '------------'|| '------------'|||
  | | '---------------''---------------''---------------'||
  | '----------------------------------------------------'|
  '-------------------------------------------------------'
First, we will flush any existing tc rules that may be lying around from previous experimentation:

Code:
#Remove all existing qdiscs, classes and filters from interface

tc qdisc del dev eth0 ingress 2>/dev/null
tc qdisc del dev eth0 root 2>/dev/null
We will not add any ingress rules here, so you may skip that one as appropriate, but be sure to start with a clean slate on egress (root).

Next we need to add a root qdisc (queueing discipline) within which to build our rules:

Code:
# 1) Add/Replace root qdisc of eth0 with an HTB instance,
#    specify handle so it can be referred to by other rules,
#    set default class for all unclassified traffic

tc qdisc replace dev eth0 root handle 1: htb default 30
Next we add a top level class to determine the total bandwidth available. You want to set this slightly below the actual bandwidth the interface or downstream path can support in order to more or less guarantee that tc remains in control (it can only restrict after all). This bucket will fill at the specified rate and all child buckets (classes) will fill from this one in their priority order.

Code:
# 2) Create single top level class with handle 1:1 which limits
#    total traffic to slightly less than the path max
#    For this example we assume 100mbit max and limit to 95mbit

tc class add dev eth0 parent 1: classid 1:1 htb rate 95mbit
Now we add child classes with their own max bandwidth limits, and assign them a priority. The highest priority buckets (i.e. lowest number) are filled first, with lower priority buckets filling from what is left over. This guarantees that the highest priority classes always get their allocated bandwidth and lower priority buckets are throttled when necessary to provide that.

Code:
# 3) Create child classes for different uses:
#    Class 1:10 is our outgoing highest priority path, outgoing SSH/SFTP in this example
#    Class 1:20 is our next highest priority path, web admin traffic for example
#    Class 1:30 is default and has lowest priority but highest total bandwidth - bulk web traffic for example

tc class add dev eth0 parent 1:1 classid 1:10 htb rate 1mbit ceil 20mbit prio 1
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 1mbit ceil 20mbit prio 2
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 1mbit ceil 95mbit prio 3
In the absence of any filters all traffic will use the default path and share the total bandwidth. We will add filters below to direct priority traffic to the priority paths.

By default HTB will attach a leaf qdisc of type pfifo, but there are others available (man tc -> see also...). Some authors say fq_codel (Fair Queuing with Controlled Delay) is worth the effort so we will add that here, but this step is optional.

Code:
# 4) Attach a leaf qdisc to each child class
#    HTB by default attaches pfifo as leaf so this is optional.
#    fq_codel is said to be worth the effort.

tc qdisc add dev eth0 parent 1:10 fq_codel
tc qdisc add dev eth0 parent 1:20 fq_codel
tc qdisc add dev eth0 parent 1:30 fq_codel
Finally, we need to add filters to direct selected traffic into the desired paths. One or more filters may attach to a parent qdisc and all traffic passing through a qdisc passes through its filters, allowing a hierarchy of filters. Filters may specify many criteria for matching traffic and which class matching traffic is sent to. Here we will create only two filters, one for our high priority outgoing SSH/SFTP traffic and another for selected admin web traffic, which we will match by iptables MARK values.

Code:
# 5) Add filters for priority traffic

tc filter add dev eth0 parent 1: handle 100 fw classid 1:10
tc filter add dev eth0 parent 1: handle 200 fw classid 1:20
In the above rules, handle sets the value to match and fw specifies what it is matched against: the firewall mark, i.e. iptables MARK or CONNMARK values. See man tc-fw for more information.

The result is that all packets MARKed 100 by iptables will be sent to the highest priority path, class 1:10, those MARKed 200 will follow class 1:20, and all other traffic will follow the default lowest priority path, class 1:30.

All that is left is to MARK outgoing traffic using iptables rules. Here are a couple of examples that can be used with the above filters, adapt to your needs, of course:

Code:
iptables -t mangle -A OUTPUT -p tcp --match multiport --dports 22,2222 -j MARK --set-mark 100
iptables -t mangle -A OUTPUT -p tcp --dst ${admin_ip} --match multiport --sports 80,443 -j MARK --set-mark 200
Now all outgoing SSH/SFTP traffic leaving eth0 will be guaranteed its 1mbit rate, and allowed to borrow up to its 20mbit ceiling, regardless of other traffic on the server. Outgoing HTTP/S destined for our admin IP will be similarly guaranteed its allocated bandwidth. All other traffic will compete for what is left after those have been satisfied.
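
As an aside, if you would rather keep the whole configuration inside tc and skip the iptables marking step, the u32 classifier can match ports directly. Here is a rough, untested sketch equivalent in spirit to the two fw filters above; note that u32 matches at fixed header offsets, so these rules assume plain IPv4 packets without IP options:

Code:
# Untested u32 alternative to the fw/MARK approach above
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
        match ip dport 22 0xffff flowid 1:10
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
        match ip dport 2222 0xffff flowid 1:10
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 \
        match ip dst ${admin_ip} match ip sport 80 0xffff flowid 1:20
tc filter add dev eth0 parent 1: protocol ip prio 2 u32 \
        match ip dst ${admin_ip} match ip sport 443 0xffff flowid 1:20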

I hope that this will be a useful example (and that I have made no obvious errors!).

Tc is really useful and not overly mysterious once you learn its terminology and how it actually works. Read the man page until it makes sense - and explore all the "see also" entries!

Good luck!

UPDATED:
I forgot to add a few simple commands for checking the status of your tc qdiscs/classes/filters:

Code:
tc qdisc ls dev eth0
tc -s qdisc ls dev eth0

tc class ls dev eth0
tc -s class ls dev eth0
tc -s -g class ls dev eth0

tc filter ls dev eth0
Other options are available for these; see the fine man pages!

Last edited by astrogeek; 02-14-2021 at 01:18 AM. Reason: typos, update, more typos
 
02-14-2021, 02:46 AM   #3
Turbocapitalist
LQ Guru
 
Registered: Apr 2005
Distribution: Linux Mint, Devuan, OpenBSD
Posts: 7,333

Original Poster
Blog Entries: 3

Rep: Reputation: 3729
Thanks.

A follow-up question: since it looks like I will have to work with iptables first and then, once that functions, work out how it should go for nftables, could the packets be sorted into the queues using -j CLASSIFY instead?

Code:
iptables -t mangle -A OUTPUT -p tcp --match multiport --dports 22,2222 \
        -j CLASSIFY --set-class 1:10

iptables -t mangle -A OUTPUT -p tcp --dst ${admin_ip} --match multiport --sports 80,443 \
        -j CLASSIFY --set-class 1:20
If so, which would be more efficient?

Anyway, that looks really useful. It will take a bit of time to work through it, but I think I see the parts now.
 
02-14-2021, 01:41 PM   #4
astrogeek
Moderator
 
Registered: Oct 2008
Distribution: Slackware [64]-X.{0|1|2|37|-current} ::12<=X<=15, FreeBSD_12{.0|.1}
Posts: 6,269
Blog Entries: 24

Rep: Reputation: 4206
I have not used nftables beyond a beginning attempt to learn my way around it. I do recall that nftables claims to provide a "better" interface to tc than iptables, but I found their simple example completely opaque to me.
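
For what it is worth, here is a rough, untested sketch of what the mark-setting side might look like in nftables syntax, assuming a mangle-like table (the table and chain names are arbitrary, and ${admin_ip} is the same placeholder as in the iptables rules above); treat it as a starting point only:

Code:
# Untested sketch - table/chain names are arbitrary, not from the thread
nft add table ip mangle
nft add chain ip mangle output '{ type route hook output priority mangle ; }'
nft add rule ip mangle output tcp dport '{ 22, 2222 }' meta mark set 100
nft add rule ip mangle output ip daddr ${admin_ip} tcp sport '{ 80, 443 }' meta mark set 200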

I have not used the CLASSIFY/CBQ methods, although they look simple enough. The only thought that comes to mind from my own experience is that using CLASSIFY --set-class would tightly bind the iptables rules to the "structure" of the tc classes, such that if I changed the tc classes I would also have to change the iptables rules. It seemed to me at the time that matching a mark value allowed me to shuffle tc classes and filters without having to modify the classifying rules. That was probably more important during the learning period than it would be in actual use, but for what it is worth.
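
To illustrate that flexibility, something like this should let you re-point all mark-100 traffic at a different class purely on the tc side (untested sketch; note the explicit prio, which tc needs to identify the filter being replaced):

Code:
# Untested: re-point mark 100 from class 1:10 to 1:20 without
# touching the iptables MARK rules
tc filter add dev eth0 parent 1: prio 1 handle 100 fw classid 1:10
tc filter replace dev eth0 parent 1: prio 1 handle 100 fw classid 1:20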

As for which is more efficient - I have no well-considered opinion. A quick look at man tc-cbq...

Code:
CLASSIFICATION
       Within  the  one  CBQ  instance many classes may exist. Each of these classes contains another qdisc, by
       default tc-pfifo(8).

       When enqueueing a packet, CBQ starts at the root and uses  various  methods  to  determine  which  class
       should receive the data.

       In  the absence of uncommon configuration options, the process is rather easy.  At each node we look for
       an instruction, and then go to the class the instruction refers us to. If the class found  is  a  barren
       leaf-node (without children), we enqueue the packet there. If it is not yet a leaf node, we do the whole
       thing over again starting from that node.

       The following actions are performed, in order at each node we visit, until one sends us to another node,
       or terminates the process.

       (i)    Consult filters attached to the class. If sent to a leafnode, we are done.  Otherwise, restart.

       (ii)   Consult the defmap for the priority assigned to this packet, which depends on the TOS bits. Check
              if the referral is leafless, otherwise restart.
       ...
... and man tc-cbq-details shows this...

Code:
CLASSIFICATION ALGORITHM
       Classification is a loop, which terminates when a leaf class is found. At any point the loop may jump to
       the fallback algorithm.

       The loop consists of the following steps:

       (i)    If the packet is generated locally and has a valid  classid  encoded  within  its  skb->priority,
              choose it and terminate.

       (ii)   Consult  the  tc  filters, if any, attached to this child. If these return a class which is not a
              leaf class, restart loop from the class returned.  If it is a leaf, choose it and terminate.
... which seems contradictory, but I interpret it to mean that, beginning from the root, each non-leaf class in a hierarchy looks at its filters first, if any, then at other classifiers, whereas leaf classes consult skb->priority first, then filters.

I am not sure I fully understand that, nor that I could judge efficiency if I did, but I would like to see your conclusions!

UPDATE: I hadn't thought about this for a while, but it really only makes sense if the skb->priority class, if any, is consulted and acted upon first, otherwise there is little advantage in using it. That would create a short-circuit from the classifying rule to the appropriate queue, which is clearly what is intended.

Using the CLASSIFY method assumes fixed queues and moves the decision making into the iptables rule set, whereas marking packets moves the decision making into the tc filters and allows a degree of flexibility to alter or rearrange queues entirely within your tc specification. Different choices for different implementations I suppose.

Last edited by astrogeek; 02-14-2021 at 02:31 PM. Reason: Updated
 
  

