Linux - General
This Linux forum is for general Linux questions and discussion.
If it is Linux Related and doesn't seem to fit in any other forum then this is the place.
Just a thought: I'd guess this depends on the receiving process, not on
the pipe. E.g. it may make sense for awk as the receiver (right-hand side
of a pipe) to do its thing to each line as it dashes past, but it certainly
doesn't make sense to start sorting before you've seen all the data.
Pure speculation, I never really thought about it in the past ;}
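The speculation holds up: whether a stage can stream depends on the receiving program, not on the pipe. A quick way to see the difference (a sketch, assuming a POSIX shell with awk and sort on the PATH):

```shell
# awk can process each line the moment it arrives on its stdin, so it
# emits output incrementally as the writer produces data.
printf 'banana\napple\ncherry\n' | awk '{ print NR, $0 }'

# sort, by contrast, must buffer ALL of its input before it can emit
# even its first output line -- no amount of pipe magic changes that.
printf 'banana\napple\ncherry\n' | sort
```

The first pipeline prints numbered lines in arrival order; the second cannot print "apple" first until it has seen every line.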
A "pipe" is an inter-process communication (IPC) channel. (It's one of several.)
Basically, a "pipe" presents itself as a file, which a particular process can either read from or write to. The "pipe," then, becomes a buffered communications-mechanism between its "reader" and its "writer," always appearing to both of them as "just a file." But here's the magic...
If "you" are reading from the pipe, but the writer (still exists and...) has not yet written anything (more) to the pipe, "you" will be put to sleep until the writer does write something. Then, you'll be able to read what the writer has just written.
If "you" are writing, and the pipe becomes "full," you will be put to sleep until the pipe is no longer full.
So, with all that having been said, the two processes, the "reader" and the "writer," are free to run on whatever CPUs they can find, as best they can manage, at the sole discretion of the system scheduler.
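You can watch that sleep-until-data behavior from the shell (a minimal sketch, assuming a POSIX shell): the reader starts immediately, but blocks on the empty pipe until the writer finally produces something.

```shell
# cat (the reader) starts right away and goes to sleep on the empty
# pipe; about one second later the subshell (the writer) writes a
# line, cat is woken up, reads it, and prints it.
( sleep 1; echo "hello from the writer" ) | cat
```

Nothing appears for roughly a second, then the line shows up -- the scheduler parked the reader until the write happened.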
When you, in the shell, type something like ls | grep foo, you actually cause two processes to be launched: one is ls, which writes its output to its STDOUT, and the other is grep, which reads its input from its STDIN. And... (magic time!) the STDOUT from the one is the STDIN of the other! It's a pipe.
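You can reproduce the same wiring without depending on what happens to be in a directory (a sketch using printf in place of ls):

```shell
# printf plays the writer role that ls played above: its stdout is the
# "write" end of the pipe. grep's stdin is the "read" end, and grep
# keeps only the lines that match.
printf 'foo\nbar\nfoobar\n' | grep foo
```

Only "foo" and "foobar" come out the far end; "bar" went into the pipe but grep discarded it.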
(Please step outside the room while your brain explodes. We've all been there... we don't mind. Now, when you come back into the room, you ought to be saying either "Sweet!" or else, "That is so way k-e-w-e-l!")
Yeah, those dudes at Bell Labs way back in the 1970's {I was almost-there, but nevermind!} had some pretty mind-blowing ideas...
There are times when I think everything would make a lot more sense if I were on LSD. Then I remember I prefer the purple pills the doctor gives me. *munch* *munch*
In addition to the great explanation by sundialsvcs, there is another thing about unix pipes which is useful but requires care in some circumstances (actually there are a few other such things, but I will only talk about one that relates somewhat to your question): When a process is writing to a pipe and the file descriptor on the “read” end has been closed, the process is sent a SIGPIPE signal. The default action for receiving such a signal (i.e., the action taken unless the program explicitly handles or ignores the signal) is to terminate the program.
As you might imagine, this behavior presents great potential for use. Here is a prototypical example of the kind of situation for which it was intended: suppose you have a really big gzip file, of which you want to read the first few lines to see what is inside. You could do something like this:
Code:
zcat reallybigfile.gz | head
Under normal circumstances, the zcat might take a minute or more to execute, but since only the first ten lines are desired, the execution of zcat is “short-circuited”. What happens is this: a pipe is opened with the “write” end replacing the stdout of zcat and the “read” end replacing the stdin of head. The zcat process does its job, decompressing the file chunk-by-chunk and writing it to stdout. The head utility does its job, reading 10 lines from its stdin and writing them to its stdout. It then exits (and part of that exiting involves closing the “read” end of the pipe). When zcat next tries to write data to the “broken pipe”, it receives a SIGPIPE signal and terminates without decompressing the entire file. This saves you a lot of time/CPU cycles, and satisfies “do what I mean”.
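You can watch the same short-circuit without needing a big gzip file, using yes, which would otherwise write forever:

```shell
# head exits after printing 3 lines and closes the read end of the
# pipe; the next write() by yes hits the broken pipe, yes receives
# SIGPIPE and dies, so the pipeline finishes instantly instead of
# running forever.
yes | head -n 3
```

In bash specifically, you can confirm the signal afterward with `echo "${PIPESTATUS[0]}"`, which reports 141 for the yes process (128 + 13, where 13 is SIGPIPE's signal number).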
As you might also imagine, you can also abuse this functionality, and it might even get you into trouble if you aren’t careful.