How to realize periodic receiving packets sent by 1000 processes within 1 millisecond
Dear all,
I want 1000 Linux processes to send small packets (each less than 200 bytes) to a single receiving process, periodically, with a 1-millisecond period. I want all the packets to arrive within the first 500 microseconds of each period; the packets are then processed during the remaining 500 microseconds. In the next period, another 1000 packets arrive, and so on.
The 1000 sending processes might be running on different machines. Each sending process transmits only one packet per period.
How can I achieve this?
I think there are two main challenges:
1. Sending - How do I synchronize the sending processes to guarantee that all packets arrive within 500 microseconds? As these processes run on different machines, this is really challenging.
2. Receiving - How do I receive so many packets within 500 microseconds? According to Intel's reports, Intel's DPDK toolkit can receive several million packets per second.
I'm trying to figure out who could spend this much money on a data collection system that isn't classified. I'll guess geological sensors for oil exploration.
Syncing the sensors to that level precludes NTP, etc. You need a GPS clock to trigger an interrupt on each system to capture your sensor and send the data.
Code:
1000 x 200 x 8 = 1.6 Mbits in 0.5 msec
1.6e6 / 0.5e-3 = 3.2e9 bits/sec
So a single 10GbE port covers the raw 3.2 Gb/s, though in practice you'd want two or more to leave yourself headroom for bursts and per-packet overhead. That's not the expensive part. The real cost is all the 10G Ethernet switches that you need for fan-in. You need to make sure they can buffer all those packets and not drop most of your samples.
The packets come in to your cards and get spread out into separate queues for each CPU core. Intel has some 60-core parts that might be interesting for this. Or you could go the GPU route. You'll need to benchmark how much power you need depending on your processing requirements.
I definitely think that you need to give up on the idea of getting all that data "to one computer." Let alone within one millisecond. Let alone "every time without fail."
Realistically, the clients might need to be able to send those "200-byte messages" in small groups, buffering them on the sending side until they receive an acknowledgment from the host that the data so-far has been successfully received.
All 1,000 computers might be able to send a second's worth of data to the host, and to receive acknowledgment of a successful transfer, within one second, such that no piece of data will arrive more than (say ...) two seconds late. But they do so by reducing the total number of "round trips." Instead of expending one round-trip to transfer only 200 bytes of data, they use that round-trip to send [up to ...] 2,000 bytes, and to confirm that the transfer was successful. (Or, if it was not, to mutually respond to this misfortune in an informed and appropriate fashion since "the show must go on.")
In my humble opinion, you just can't define a "truly reliable" system that is this dependent upon network hardware. These are physical devices operating within range of sunspots and solar flares ... and cats who like to play with pretty blue wires.
Hmm yeah. 200 bytes is a small packet on its own, but 1000 of them per millisecond adds up. You can use a pipe to send data from process to process and use select() to determine when data is available. However, 200,000 bytes per millisecond is a lot of data, and a normal Linux pipe buffer is 65,536 bytes by default (it can be enlarged at runtime with fcntl(F_SETPIPE_SZ), up to /proc/sys/fs/pipe-max-size, without rebuilding the kernel). Other resources are worse: a serial USB transfer is limited to 4095 bytes unless you redo the driver, and I'm guessing you'll run into some other 65,535-byte limitation along the way as well.
And also there's the fact that Linux is not real-time, it is a multi-purpose operating system and as much as people can claim it comes close to real-time, it doesn't at the micro level. What it can do is emulate real-time, but it has to be well in excess of the required horsepower (processing time wise) so that you can't notice that it occasionally falls behind.
What this really equates to is 1.6 gigabits per second sustained, and 3.2 Gb/s during each 500-microsecond receive window. Not insurmountable data transfer; however, whatever processing you need to do needs to be very fast.
Typically when faced with something like this, I use a true embedded concentrator board to process the raw data and convert it into a stream that requires much less bandwidth and/or processing power.
Hard to say, it really depends what you're doing with this data. But honestly, rather than just saying "I need Linux to do 'this'," it's more correct to look at what your technical goal is and conduct some form of feasibility study to determine how best to engineer your solution. What this sounds like instead is something not well thought out, or one of those "Gee, it'd be cool if ..." thoughts.
And OH SNAP! I missed MICROSECONDS! I originally read it as milliseconds, i.e. all of that in half a second. Yeah, this is either not real, some academic pipe dream, or a pointy-haired boss thinking Linux (or anything) might be the end-all/be-all, but it's just not a well-thought-out concept.