Using tc or iptables to control jitter (packet delay variation)
I would like to use tc to control jitter for an RTP streaming application. I am OK with introducing latency if it is a means to the end of minimizing the jitter experienced by the streaming application.
I believe I understand how tc-tbf / tc-htb is used to shape traffic, but I am having trouble visualizing how I would use these to manage a jitter buffer.
In my application there is basically no contention for the line - at least none that I am creating. Just variability caused by the public internet.
Any kicks in the right direction would be appreciated.
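For context, tc only delays or shapes traffic on egress; it cannot remove jitter the internet has already added to inbound packets. What it is handy for here is reproducing internet-like jitter on a test box, so you can see how the application's own jitter buffer copes. A minimal sketch using tc-netem, where eth0 as the relevant interface is an assumption (root required):

```shell
# Sketch: emulate jitter on outbound traffic (interface name eth0 is assumed).
# Add 100ms of base delay with 20ms of normally distributed variation:
tc qdisc add dev eth0 root netem delay 100ms 20ms distribution normal

# Inspect the active qdisc:
tc qdisc show dev eth0

# Remove the emulation when done:
tc qdisc del dev eth0 root
```

This does not solve the receive-side problem, but it lets you measure how much jitter the streaming application tolerates before adjusting its buffer.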
Quote:
In my application there is basically no contention for the line - at least none that I am creating. Just variability caused by the public internet.
...and that, AFAIK, is your problem. Packet delay variation caused by Internet weather isn't something you can manage from either end of the connection. At best you can suggest handling by marking traffic for QoS. Being able to ship data over the shortest route is one of the reasons for using a content distribution network.
unSpawn is spot on. You can configure queueing with tc to give preferential treatment to RTP on your system, but the internet is going to take no notice of your QoS policy: for QoS to work, all nodes in the transmission path need to support the same policy, and unfortunately the internet does not support end-to-end QoS.
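To illustrate the local preferential-treatment part (not end-to-end QoS), something like the following could be sketched. The interface name and RTP port range (16384-32767, a common default) are assumptions; adjust for your setup:

```shell
# Assumptions: eth0 is the egress interface; RTP uses UDP ports 16384-32767.
# Create a 3-band prio qdisc; band 0 (flowid 1:1) is always dequeued first.
tc qdisc add dev eth0 root handle 1: prio bands 3

# Steer UDP traffic in the assumed RTP range into the highest-priority band.
# "dport 16384 0xc000" matches any port where (port & 0xc000) == 0x4000,
# i.e. the range 16384-32767.
tc filter add dev eth0 parent 1: protocol ip u32 \
    match ip protocol 17 0xff \
    match ip dport 16384 0xc000 \
    flowid 1:1

# Optionally mark the same traffic EF so any DSCP-aware hop *might* prefer it:
iptables -t mangle -A OUTPUT -p udp --dport 16384:32767 -j DSCP --set-dscp-class EF
```

Note this only orders packets leaving your own box; as said above, downstream routers are free to ignore the DSCP marking entirely.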
What you can do is increase the jitter buffer in the application itself. If it's one-way video streaming this will help, but if it's bidirectional interactive traffic the added latency will very rapidly make it unusable.
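As one concrete illustration of turning up an application-side jitter buffer: if the receiver happened to be a GStreamer pipeline, the rtpjitterbuffer element's latency property sets the buffer depth in milliseconds. The port, payload type, and caps below are purely illustrative assumptions:

```shell
# Illustrative receiver: RTP/H.264 on UDP port 5004 (port, payload, caps assumed).
# latency=500 trades half a second of delay for tolerance of larger delay variation.
gst-launch-1.0 udpsrc port=5004 \
    caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" ! \
    rtpjitterbuffer latency=500 ! rtph264depay ! avdec_h264 ! autovideosink
```

Whatever the actual player, the knob to look for is its jitter/playout buffer size, per the trade-off described above.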
Technically you are correct, but if a node in the path did happen to honour the marking, it would only be because its default config uses a queueing method that weights traffic by its QoS markings. E.g. if you installed a Cisco router on a sub-2M link you would get WFQ as the default queueing mechanism, which weights based on IP precedence. It's not really QoS in the sense of an end-to-end service guarantee.