Artificially delaying or buffering incoming network traffic for 10 or 20 ms
We are running a Java-based trading application, and there are certain periods where we want to prioritize outgoing network traffic as much as possible for about 10 - 20 ms. Is there a way to temporarily buffer all *incoming* network traffic during a short time period, either on the network card or via a process or buffer on our RHEL 5.8 box?
The rationale behind this is that the incoming network traffic spikes during this same period, and the application which is processing this traffic ends up stealing CPU cycles and/or locking the process we are trying to prioritize. We do not have fine-grained control over the application treating the incoming network traffic.
We're on a 1 Gbps connection, so a buffer of about 1 MB should be sufficient. We would prefer not to drop the incoming traffic and request retransmission, as this would increase load on our network during already busy periods.
Hi,
If I understand correctly, you'd like to rate-limit requests (possibly leading to delays during bursts), rather than actually delay traffic at all times.
Linux indeed provides "queueing disciplines" (or qdiscs) on outbound interfaces to rate-limit traffic. Using ifb you can attach a virtual "outbound" rate-limiter to inbound traffic.
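To make the ifb approach concrete, here is a hypothetical sketch of redirecting ingress traffic through an ifb device and delaying it there with netem. Interface names (eth0, ifb0) and the 20 ms figure are assumptions; all commands require root.

```shell
# Load the ifb module and bring the virtual device up.
modprobe ifb numifbs=1
ip link set dev ifb0 up

# Attach an ingress qdisc to eth0 and redirect every arriving packet
# to ifb0's egress path, where normal (egress-only) qdiscs can apply.
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol all u32 match u32 0 0 \
    action mirred egress redirect dev ifb0

# netem holds each packet for 20 ms before handing it to the stack;
# "limit" is the queue size in packets while they wait.
tc qdisc add dev ifb0 root netem delay 20ms limit 10000

# To undo the whole arrangement:
# tc qdisc del dev eth0 ingress
# tc qdisc del dev ifb0 root
```

Note that adding and removing these qdiscs from a shell script takes well over a millisecond, so toggling this on and off around a specific 10–20 ms window with tight timing would be a separate problem.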
The link you provided does explain most of the things you need. I've built a few set-ups in the past using tc qdiscs and htb. One thing I recall is that the user interface is "hard" (at best).
Note that in your use-case, just "re-nicing" the process seems like a more logical approach, no? (a lot easier!)
Thanks arre, a combination of qdiscs and ifb sounds like it could work, although it still isn't clear to me how to implement it. Ideally we would like to completely stop several applications from receiving inbound traffic for about 20 ms at a certain point in time, and catch up afterwards. We have no problem with *all* inbound traffic being "paused" for this period, buffering up somewhere (inside the ifb application?)
Re-nicing would be a better solution; however, this is a third-party application that we do not have direct control over.
Hi,
Regarding the renicing: can you confirm that you do have permission to set up qdiscs on interfaces (which normally requires root), but really do not have access to the application's scheduling rights? (If you have root, you can renice any application.)
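To illustrate the point about renicing not requiring control over the application itself: renice works on any PID from outside the process. A minimal sketch, using a background `sleep` as a stand-in for the third-party feed handler:

```shell
# Stand-in for the CPU-hungry third-party process we want to deprioritize.
sleep 30 &
feed_pid=$!

# Raise its niceness (lower its CPU priority). Raising niceness needs no
# root; lowering it (negative values) does.
renice -n 10 -p "$feed_pid"

# Confirm the new nice value.
ps -o pid,ni,comm -p "$feed_pid"

kill "$feed_pid"
```

Giving the trading process a *negative* nice value (e.g. `renice -n -10 -p <pid>` as root) is the complementary move.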
Yes Arnout, this is our own co-located box. We're pretty nervous about making any deep OS changes to it, though, so we're thinking through the alternatives. Another possibility might be to turn on Ethernet flow control for 20 ms, which would be much less invasive, although we're not sure the machines at the exchanges will play along. We also need very fine granularity: the timing of the filter/buffer needs jitter of approximately 1 ms, and what I've seen so far suggests these services operate at a much coarser granularity; I think one was only being polled 100 times per second, i.e. 10 ms granularity.
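For reference, Ethernet (802.3x) pause-frame flow control is controlled per-NIC with ethtool. A hypothetical sketch, assuming an interface named eth0 and root access:

```shell
# Show the current pause-frame (flow control) settings for the NIC.
ethtool -a eth0

# Enable receive-direction flow control: the NIC sends pause frames when
# its buffers fill, asking the link partner to hold off transmitting.
ethtool -A eth0 rx on

# ... and disable it again afterwards.
ethtool -A eth0 rx off
```

Two caveats worth checking before relying on this: on many NICs, changing pause parameters triggers a link renegotiation that can take far longer than 20 ms, and pause frames only reach the directly attached switch, not the exchange's hosts. So this is a starting point to test, not a drop-in solution.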