LinuxQuestions.org
View Poll Results: What is the most suitable IPC for transferring data?
Shared Memory: 1 (12.50%)
Unnamed Pipes (blocking): 0 (0%)
Unnamed Pipes (non-blocking): 3 (37.50%)
Named Pipes (on the file system): 1 (12.50%)
Actual Files (I'm leaving now): 1 (12.50%)
Other: 2 (25.00%)
Voters: 8.

Old 11-25-2007, 11:09 AM   #1
ta0kira
Senior Member
 
Registered: Sep 2004
Distribution: FreeBSD 9.1, Kubuntu 12.10
Posts: 3,078

Shared Memory vs. Pipes for IPC


I've been pondering this for a while, and can't decide whether pipes or shared memory have the advantage when transferring data between programs. Here are the deciding features I see with each:

(unnamed) Pipes
  • Can switch between blocking and non-blocking mode
  • Don't have to free them when done
  • Are automatically inherited by children
  • Must read and write in a linear fashion
Shared Memory
  • Can store structures
  • Won't ever block - positive
  • Can have as many programs read or write to it as you need
  • Won't ever block - negative: must use semaphores or your own spin-locks
  • It's possible for it to not be freed even when all programs exit
I think if one doesn't need to block, shared memory has the advantage. Blocking on a pipe can be very useful, though a lot of the time that's not feasible. What do the masses think?
ta0kira

PS This poll is regarding the transfer of data only: something that both are actually capable of.

Last edited by ta0kira; 11-25-2007 at 11:15 AM.
 
Old 11-26-2007, 03:57 AM   #2
bigearsbilly
Senior Member
 
Registered: Mar 2004
Location: england
Distribution: Slack, Debian, Mint, Puppy, Raspbian
Posts: 3,471

pipes also...
maintain separation and encapsulation, unix style, do one job and do it well.

Shared memory is more for crap OSes like DOS, hence all the pollution, cross-infection, and lack of security.

I think you'll find most classic unix programming books discouraged threads, shared memory, etc.
 
Old 11-26-2007, 04:14 AM   #3
matthewg42
Senior Member
 
Registered: Oct 2003
Location: UK
Distribution: Kubuntu 12.10 (using awesome wm though)
Posts: 3,530

I think the question is far too general. Some problems will be better solved with one method, some with another.
 
Old 11-27-2007, 05:16 AM   #4
ta0kira
Senior Member
 
Original Poster
I did mention that the use in question was data transfer between processes only...
ta0kira
 
Old 11-27-2007, 05:32 AM   #5
matthewg42
Senior Member
 
But not the type and size of data, or access requirements (sequential / random access)...
 
Old 11-27-2007, 08:59 PM   #6
ta0kira
Senior Member
 
Original Poster
I guess I meant sequential, as in a data stream. Sorry.

One concern I have with pipes is that the reader has no control over how much (and when) the writer writes to a pipe. With shared memory you can wait for the other end to read before writing more, preventing an overflow.
ta0kira
 
Old 11-28-2007, 03:32 AM   #7
bigearsbilly
Senior Member
 
I would probably use sockets anyway, not pipes, then it's easy to make it distributed in the future.
 
Old 11-28-2007, 01:48 PM   #8
95se
Member
 
Registered: Apr 2002
Location: Windsor, ON, CA
Distribution: Ubuntu
Posts: 740

I agree w/ bigearsbilly; I would use sockets. Use local sockets (PF_LOCAL) for now (since you're saying local only); you would still get good speeds (relative to pipes). If you want to allow remote processes to access it later, you can very easily switch to normal IP sockets as well.
 
Old 11-28-2007, 01:52 PM   #9
95se
Member
 
I don't think I gave you a good enough reason. Shared memory is better suited to very tightly coupled processes; if that's your case, it may be a good fit. However, if you ever plan on decoupling these two processes in the future (maintaining them separately), then implementing them w/ shared memory will probably create headaches later on. Define a way to communicate between the two processes, and then write code to handle just that (sending messages and whatnot).
 
Old 12-01-2007, 11:07 AM   #10
ta0kira
Senior Member
 
Original Poster
Most of the places I use shared memory I also have non-blocking pipes, either of which can be selected transparently at run-time. Maybe I will add sockets to the list! I'll have to look into it.
ta0kira

PS What is the easiest way to simulate unnamed pipes between independent processes (i.e. one did not spawn the other)? My current guess would be to use a named pipe then delete it once both ends are open. I can't figure out a way to do that securely, so I might have to stick with shared memory in that particular case.

Last edited by ta0kira; 12-01-2007 at 11:10 AM.
 
Old 12-03-2007, 04:12 AM   #11
bigearsbilly
Senior Member
 
Well, sockets would be it. They are very fast on the same machine, with the advantage that the process can also be distributed if you so wish.

I usually use INET sockets for this sort of stuff, for the above reasons, though you can use unix domain sockets.

Just try 'em, you'll like it.
 
Old 12-23-2007, 04:56 PM   #12
ta0kira
Senior Member
 
Original Poster
I've been working with sockets over the last few weeks and they seem to be the most useful IPC for unrelated processes. It looks like unnamed pipes are still best for IPC between forked processes, though. In any case, I've pretty much sworn off shared memory.

A problem I'm running into with pipe IPC lately is timing. I'm using a combination of blocking/non-blocking input with blocking output: I either use select or a blocking input descriptor to wait for input, then read in non-blocking mode until a set of data is complete or a read comes back empty. When I send "a lot" of data (relative to previous testing of the application) through a sequence of 5 pipes between 6 processes, not all of the data makes it to the other end unless I space it out with a 5ms nanosleep every 128 bytes or so (the lowest reliable latency right now). Each process parses and analyzes the data to route it to the next process and ensure its validity, so reading isn't constant. I have a build option for testing that completely eliminates blocking input by using spin-locks, but even then I need the 5ms pauses.

I tried eliminating the output buffers using setvbuf, hoping to make write operations block, but that actually made it worse. Really, the only thing that seems to help is adding latency to the write cycles, but that limits me to 200 transfer operations per second. Is there an effective way to block writing (even to a buffer) until reading takes place? The code in question is fairly extensive, but essentially the read loops parse as they read, process the input, then start a new read cycle, while the write loops are flooding IPC to the point that some data is lost.
ta0kira

PS I'm going to try using fsync after write operations in place of the 5ms latency to see if that works. I'm pretty sure all of my write operations take place in main threads.

Last edited by ta0kira; 12-23-2007 at 10:24 PM.
 
  

