[SOLVED] receiving UDP packets - where does the latency come from?
Hi network experts,
I have a microcontroller sending a 500-byte UDP packet every millisecond to my PC (direct cable connection, no other device or software on the dedicated eth2 interface).
With tcpdump I see the packets arriving with latencies (time between two consecutive packets) of 1000µs (±11µs) - this is what I want.
BUT my C program sees latencies between 10µs and 3500µs when executed as a normal user. When I run it as root with "nice -n -20" the latencies get better - between 10µs and 2000µs - but that is still not good.
Most of the measured latencies are still around 1000µs, but some packets arrive with a hefty delay, and once such a delayed packet is received, my program receives the next packet almost immediately (10µs later).
So, what is the reason for the delay of the packets? I expected it to be the Linux scheduler, but with "nice -n -20" I hoped it would be nicer to me.
Any idea how to get rid of the delay?
The program uses a blocking recvfrom() call to receive the UDP packets; a sketch of the measurement loop follows below.
The machine is a 2x dual-core Opteron running 2.6.34-gentoo-r6-cs x86_64.
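A minimal sketch of such a measurement loop (the port number 5000 and the buffer size are placeholders, not taken from the original program):
Code:
/* Blocking recvfrom() loop that measures the time between consecutive
 * UDP packets as seen by userspace - this includes any scheduling delay.
 * On older glibc, link with -lrt for clock_gettime(). */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(5000);      /* placeholder port */
    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("bind");
        return 1;
    }

    char buf[1500];
    struct timespec prev, now;
    clock_gettime(CLOCK_MONOTONIC, &prev);

    for (;;) {
        ssize_t n = recvfrom(fd, buf, sizeof buf, 0, NULL, NULL);
        if (n < 0) {
            perror("recvfrom");
            return 1;
        }
        clock_gettime(CLOCK_MONOTONIC, &now);
        long us = (now.tv_sec - prev.tv_sec) * 1000000L
                + (now.tv_nsec - prev.tv_nsec) / 1000L;
        prev = now;
        printf("%zd bytes, %ld us since previous packet\n", n, us);
    }
}
Because the timestamp is taken after recvfrom() returns, this loop measures when the process was scheduled, not when the packet hit the wire - which is exactly why it reports far more jitter than tcpdump does.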
An example of how Linux is not a real-time OS. For that matter, neither TCP/IP nor Ethernet is deterministic. If you need hard real-time, consider an OS that is built for it, such as VxWorks.
--- rod.
tcpdump uses the packet_mmap interface. Try that, combined with a real-time priority/scheduler configuration. Also make sure your kernel doesn't have the NO_HZ option enabled.
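A minimal sketch of a PACKET_RX_RING (packet_mmap) receiver, assuming TPACKET_V1 (the default on 2.6.x kernels). The ring dimensions are arbitrary, eth2 is the interface named in the question, and PF_PACKET sockets require root or CAP_NET_RAW:
Code:
/* packet_mmap receive ring: the kernel writes frames (with its own
 * timestamps) into shared memory, so measurement no longer depends on
 * when userspace gets scheduled. */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/mman.h>
#include <linux/if_packet.h>
#include <linux/if_ether.h>
#include <net/if.h>
#include <arpa/inet.h>
#include <poll.h>

int main(void)
{
    int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); return 1; }

    /* Ring geometry (arbitrary): tp_frame_nr must equal
     * tp_block_nr * (tp_block_size / tp_frame_size). */
    struct tpacket_req req = {
        .tp_block_size = 4096,
        .tp_block_nr   = 64,
        .tp_frame_size = 2048,
        .tp_frame_nr   = 128,
    };
    if (setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof req) < 0) {
        perror("PACKET_RX_RING"); return 1;
    }

    size_t len = (size_t)req.tp_block_size * req.tp_block_nr;
    unsigned char *ring = mmap(NULL, len, PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, 0);
    if (ring == MAP_FAILED) { perror("mmap"); return 1; }

    /* Bind to the dedicated interface from the original post. */
    struct sockaddr_ll ll = {0};
    ll.sll_family   = AF_PACKET;
    ll.sll_protocol = htons(ETH_P_ALL);
    ll.sll_ifindex  = if_nametoindex("eth2");
    if (bind(fd, (struct sockaddr *)&ll, sizeof ll) < 0) {
        perror("bind"); return 1;
    }

    unsigned frame = 0;
    for (;;) {
        struct tpacket_hdr *hdr =
            (struct tpacket_hdr *)(ring + (size_t)frame * req.tp_frame_size);

        /* Sleep until the kernel hands this frame to userspace. */
        while (!(hdr->tp_status & TP_STATUS_USER)) {
            struct pollfd pfd = { .fd = fd, .events = POLLIN };
            poll(&pfd, 1, -1);
        }

        /* tp_sec/tp_usec are the kernel's receive timestamp. */
        printf("frame %u: len=%u ts=%u.%06u\n",
               frame, hdr->tp_len, hdr->tp_sec, hdr->tp_usec);

        hdr->tp_status = TP_STATUS_KERNEL;   /* hand the slot back */
        frame = (frame + 1) % req.tp_frame_nr;
    }
}
Note that a raw PF_PACKET socket sees whole Ethernet frames on the interface, so picking out the UDP payload (parsing the Ethernet/IP/UDP headers or attaching a socket filter) is left to the application.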
Thank you very much for the hints.
Especially thank you orgcandman. packet_mmap seems to be exactly the key word I was looking for.
Changing the operating system is not an option because some special real-time code is already running on other CPU cores (digital control loops with response times well below 10µs - that code is locked to its own CPU core, which is "shut down" for the Linux system). I can't do the same for the UDP receiving code because I want to use sockets and other kernel functionality.
The worst-case latency went down from 1000µs to 500µs by setting my process to real-time scheduling priority using the code below.
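A minimal sketch of that kind of setup (a reconstruction using sched_setscheduler() with SCHED_FIFO, not necessarily the poster's exact code; requires root or CAP_SYS_NICE):
Code:
#include <stdio.h>
#include <sched.h>
#include <sys/mman.h>

int main(void)
{
    /* Priority 1..99 under SCHED_FIFO; 50 is an arbitrary choice. */
    struct sched_param sp = { .sched_priority = 50 };

    /* Move the calling process (pid 0) into the real-time FIFO class. */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) < 0) {
        perror("sched_setscheduler");
        return 1;
    }

    /* Commonly paired with mlockall() so page faults can't add latency. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) < 0)
        perror("mlockall");

    /* ... receive loop goes here ... */
    return 0;
}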
But I don't understand the PF_PACKET raw packet sending/receiving that I need for packet_mmap :-(
I will open a new thread for this.