Poll: What is the most suitable IPC for transferring data?
I've been pondering this for a while, and can't decide whether pipes or shared memory have the advantage when transferring data between programs. Here are the deciding features I see with each:
(unnamed) Pipes
Can switch between blocking and non-blocking mode (see the sketch after this list)
Don't have to free them when done
Are automatically inherited by children
Must read and write in a linear fashion
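For what it's worth, here is a minimal sketch of the blocking/non-blocking toggle I mean, using fcntl() on the read end (error handling abbreviated; the function name is just illustrative):

#include <fcntl.h>
#include <unistd.h>

/* Create an unnamed pipe and put its read end in the requested mode.
 * The same F_SETFL call switches it back the other way at run time. */
int make_pipe(int fds[2], int nonblocking)
{
    if (pipe(fds) == -1)
        return -1;
    int flags = fcntl(fds[0], F_GETFL);
    if (flags == -1)
        return -1;
    if (nonblocking)
        flags |= O_NONBLOCK;
    else
        flags &= ~O_NONBLOCK;
    return fcntl(fds[0], F_SETFL, flags);
}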
Shared Memory
Can store structures (see the sketch after this list)
Won't ever block - positive
Can have as many programs read or write to it as you need
Won't ever block - negative: must use semaphores or your own spin-locks
It's possible for it to not be freed even when all programs exit
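And here is roughly what I mean by storing a structure in shared memory, using POSIX shm_open()/mmap() (the region name and struct are only illustrative; older systems need -lrt at link time):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

struct shared_data { int counter; char message[64]; };

/* Map (creating if necessary) a shared structure. Every process that maps
 * "/my_region" sees the same bytes. Some process must eventually call
 * shm_unlink("/my_region"), or the region outlives all of them. */
struct shared_data *map_region(void)
{
    int fd = shm_open("/my_region", O_CREAT | O_RDWR, 0600);
    if (fd == -1)
        return 0;
    if (ftruncate(fd, sizeof(struct shared_data)) == -1) {
        close(fd);
        return 0;
    }
    void *p = mmap(0, sizeof(struct shared_data),
                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);  /* the mapping remains valid after the descriptor closes */
    return p == MAP_FAILED ? 0 : (struct shared_data *) p;
}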
I think that if you don't need to block, shared memory has the advantage. Blocking while waiting for a pipe can be very useful, but a lot of times that isn't feasible. What do the masses think?
ta0kira
PS This poll is regarding the transfer of data only: something that both are actually capable of.
I guess I meant sequential, as in a data stream. Sorry.
One concern I have with pipes is that the reader has no control over how much (and when) the writer writes to a pipe. With shared memory you can wait for the other end to read before writing more, preventing an overflow.
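To make that concrete, this is the kind of handshake I have in mind: the writer blocks on a "slot free" semaphore and the reader posts it back after consuming, so the writer can never get ahead of the reader. Only a sketch with made-up names; the semaphores would live in the shared region itself (sem_init with pshared = 1):

#include <semaphore.h>
#include <string.h>

struct channel {
    sem_t space;        /* initialized to 1: one free slot       */
    sem_t data;         /* initialized to 0: nothing to read yet */
    char  payload[256];
};

void writer_send(struct channel *ch, const char *msg, size_t len)
{
    sem_wait(&ch->space);   /* blocks until the reader has consumed */
    memcpy(ch->payload, msg, len);
    sem_post(&ch->data);    /* wake the reader */
}

void reader_recv(struct channel *ch, char *out, size_t len)
{
    sem_wait(&ch->data);    /* blocks until the writer posts */
    memcpy(out, ch->payload, len);
    sem_post(&ch->space);   /* let the writer reuse the slot */
}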
ta0kira
I agree with bigearsbilly; I would use sockets. Use local sockets (PF_LOCAL) for now (since you're saying local only), because you would still have good speeds (relative to pipes). If you want to allow remote processes to access it later, you can very easily allow normal IP sockets as well.
I don't think I gave you a good enough reason. Shared memory is better suited to very tightly coupled processes; if that's your case, it may be a good fit. However, if you ever plan on decoupling these two processes in the future (maintaining them separately), then implementing them with shared memory will probably create headaches later on. Define a way to communicate between the two processes, and then write code to handle just that (sending messages and so on).
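A bare-bones sketch of the PF_LOCAL approach (path and error handling are only illustrative); switching to remote access later is mostly a matter of changing the address family and sockaddr:

#include <sys/socket.h>
#include <sys/un.h>
#include <string.h>
#include <unistd.h>

/* Server side: create a listening local socket bound to a filesystem path. */
int local_listen(const char *path)
{
    int fd = socket(PF_LOCAL, SOCK_STREAM, 0);
    struct sockaddr_un addr;
    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_LOCAL;
    strncpy(addr.sun_path, path, sizeof addr.sun_path - 1);
    unlink(path);                    /* remove any stale socket file */
    bind(fd, (struct sockaddr *) &addr, sizeof addr);
    listen(fd, 8);
    return fd;                       /* call accept() on this */
}

/* Client side: connect to the same path. */
int local_connect(const char *path)
{
    int fd = socket(PF_LOCAL, SOCK_STREAM, 0);
    struct sockaddr_un addr;
    memset(&addr, 0, sizeof addr);
    addr.sun_family = AF_LOCAL;
    strncpy(addr.sun_path, path, sizeof addr.sun_path - 1);
    connect(fd, (struct sockaddr *) &addr, sizeof addr);
    return fd;
}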
Most of the places I use shared memory I also have non-blocking pipes, either of which can be selected transparently at run-time. Maybe I will add sockets to the list! I'll have to look into it.
ta0kira
PS What is the easiest way to simulate unnamed pipes between independent processes (i.e. one did not spawn the other)? My current guess would be to use a named pipe then delete it once both ends are open. I can't figure out a way to do that securely, so I might have to stick with shared memory in that particular case.
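Roughly what I'm picturing (the path is illustrative, and note it does nothing about the race where a third process opens the name before the unlink, which is exactly the security problem):

#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* Writer side: create a FIFO, wait for a reader to open the other end,
 * then remove the name so the pipe behaves like an unnamed one. */
int fifo_writer_end(const char *path)
{
    if (mkfifo(path, 0600) == -1)
        return -1;
    int fd = open(path, O_WRONLY);   /* blocks until a reader opens the FIFO */
    unlink(path);                    /* the name goes away; the open pipe lives on */
    return fd;
}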
I've been working with sockets over the last few weeks and they seem to be the most useful IPC for unrelated processes. It looks like unnamed pipes are still best for IPC between forked processes, though. In any case, I've pretty much sworn off shared memory.
A problem I'm running into with pipe IPC lately is timing. I'm using a combination of blocking/non-blocking input with blocking output: I either use select or a blocking input descriptor to wait for input, then read in non-blocking mode until a set of data is complete or a read comes back empty. When I send "a lot" of data (relative to previous testing of the application) through a sequence of 5 pipes between 6 processes, not all of the data makes it to the other end unless I space it out with a 5ms nanosleep every 128 bytes or so (the lowest reliable latency right now). Each process parses and analyzes the data to route it to the next process and ensure its validity, so reading isn't constant. I have a build option for testing that completely eliminates blocking input by using spin-locks, but even then I need the 5ms pauses.
I tried eliminating the output buffers using setvbuf hoping to make write operations block, but that actually made it worse. Really the only thing that seems to help is adding latency to the write cycles, but that limits me to 200 transfer operations per second. Is there an effective way to block writing (even to a buffer) until reading takes place? The code in question is fairly extensive, but essentially what's happening is the read loops parse as they read, process the input, then start a new read cycle and the write loops are essentially flooding IPC to the point that some data is lost.
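Stripped of the parsing and routing, the read cycle is roughly this (the real code is far more involved, and the names here are just for illustration):

#include <sys/select.h>
#include <errno.h>
#include <unistd.h>

/* Wait for input with select(), then drain the (already non-blocking)
 * descriptor until read() would block, the writer closes, or the buffer
 * fills. Returns the number of bytes collected, or -1 on error. */
ssize_t drain_input(int fd, char *buf, size_t size)
{
    fd_set set;
    FD_ZERO(&set);
    FD_SET(fd, &set);
    if (select(fd + 1, &set, 0, 0, 0) <= 0)
        return -1;

    size_t total = 0;
    while (total < size) {
        ssize_t n = read(fd, buf + total, size - total);
        if (n > 0)
            total += (size_t) n;
        else if (n == 0)                                  /* writer closed */
            break;
        else if (errno == EAGAIN || errno == EWOULDBLOCK) /* drained for now */
            break;
        else
            return -1;
    }
    return (ssize_t) total;
}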
ta0kira
PS I'm going to try using fsync after write operations in place of the 5ms latency to see if that works. I'm pretty sure all of my write operations take place in main threads.