can you do threading or multiple processes in a shell script?
I've got a shell script that converts a directory of images to .ppm file format & then makes a QuickTime out of them. The conversion process sometimes takes a long time (sometimes there's a resize or a composite done at the same time). Since I have a multi-processor system with an SMP kernel, is there a way I can convert two images at once using both processors... some sort of fork or something, but in a shell script (preferably tcsh)?
I thought about just running one process in the background, but there's no way of knowing when the background process gets done. I could see lots of background processes piling up & grinding the machine to a halt.
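For what it's worth, the shell does give you a way to know when background jobs are done: the built-in `wait` blocks until every background child has exited. Here's a minimal bash sketch (it launches everything at once rather than two at a time, so it only answers the "when are they done" part; the `: >` line is just a stand-in for the real conversion command):

```shell
#!/bin/bash
# Launch one background job per file, then block with `wait` until all
# of them have exited, so nothing piles up past this batch.
mkdir -p out
for f in a b c; do
    (
        sleep 0.2          # stand-in for the slow conversion
        : > "out/$f.ppm"   # e.g. convert "$f" "out/$f.ppm"
    ) &
done
wait    # returns only after every child launched above has finished
echo "batch done: $(ls out | wc -l) files written"
```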
I wonder if the following will do the trick: create a fifo, send the filenames to be processed to that fifo, and let the two processes that are supposed to do the work retrieve the filenames from the pipe. Try this (Bash-)script to see how it works:
Code:
#!/bin/bash
( mkfifo fifo && ls ~ > fifo && rm fifo ) &
for i in 1 2
do
    ( while read; do echo "Process $i received $REPLY."; sleep 1; done ) < fifo &
done
The question is whether the load will be distributed among the processors.
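On the load question: the workers are ordinary separate processes, so an SMP kernel is free to schedule them on both CPUs. One way to make the FIFO version sturdier (a bash sketch, going beyond the script above): let the parent open the FIFO once on fd 3 so both workers share a single read end, and serialize each `read` with `flock` from util-linux, since two processes reading the same pipe can in principle split a line between them.

```shell
#!/bin/bash
# The parent opens the FIFO once; both workers inherit fd 3, and flock
# makes each read of a line atomic with respect to the other worker.
fifo=$(mktemp -u) && mkfifo "$fifo"
printf '%s\n' a b c d > "$fifo" &    # the work queue (filenames, say)
exec 3< "$fifo"                      # open the read end once, up front
rm -f "$fifo"                        # the open fd keeps the pipe alive
lock=$(mktemp)
for i in 1 2; do
    (
        while true; do
            flock 4                               # one reader at a time
            read -r job <&3 || { flock -u 4; break; }
            flock -u 4
            echo "worker $i: $job" >> handled.log # the real work goes here
        done
    ) 4< "$lock" &
done
wait            # both workers exit once fd 3 hits end-of-file
rm -f "$lock"
sort handled.log
```

Each worker holds the lock only long enough to grab one line, so the actual conversions still run in parallel.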
Maybe someone can help me out reading spirit receiver's latter example... It appears that some file descriptor (3) gets the output of "ls ~", then, through the read command, each process pulls a single line of that output and processes it. Is that about right?
I don't see any thread/process management here... in other words, if there was no sleep command and if it took maybe 2-3 seconds to execute a single command, wouldn't this spin off so many processes it would eventually kill the machine?
But then, the loop spawns exactly two processes. You could use
Code:
for i in $(seq 1 10)
to spawn 10 processes instead, but the number is fixed and doesn't depend on the loop's content.
Each process stays alive until the file descriptor is exhausted. And it reads a single line every second; that's what the "sleep 1" is for, it just slows the demo down.
... and therein lies the question... If there was no sleep statement, this would never stop executing 2 processes at a time, correct? In other words, is there any way to guarantee that both processes have finished before we move on to the next iteration of the loop?
The reason I ask is that in the original problem, I stated that I'm converting images & want to do more than one at a time. If a sleep is required, then those conversions where each file takes less than half a second to convert would perform worse by this method. And those conversions where each file takes more than one second to convert would end up with extra processes queuing up over many iterations, up to the point where the machine would be swapped to the point of no return after many files are processed.
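A direct answer to the "have they finished" part: the shell's `wait` guarantees exactly that, either bare (all children) or given the PIDs recorded from `$!`. A bash sketch (the worker body is a placeholder):

```shell
#!/bin/bash
# $! holds the PID of the most recent background job; wait with those
# PIDs returns only when those particular processes have exited.
exec 3< <(printf '%s\n' a b c d)
pids=""
for i in 1 2; do
    ( while read -r job <&3; do : "process $job"; done ) &
    pids="$pids $!"
done
wait $pids          # blocks until both workers are gone
echo "both workers finished"
```

No sleep is needed for correctness; the workers simply pull lines as fast as they can and exit at end-of-file.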
In each iteration, a single process is launched in the background. That's what the following does.
Code:
( ... ) &
So there are two background processes. After these are launched, the script exits.
Each of these two processes is supposed to get its standard input from file handle 3, so we use the following instead:
Code:
( ... ) <&3 &
And what each process does is this:
Code:
while read; do echo "Process $i received $REPLY."; sleep 1; done
If you remove "sleep 1", you'll see that each process will be much quicker at picking lines from the file handle; it might well be that one of them won't even get a chance to retrieve anything before the handle is exhausted. But this won't change the number of processes.
Does the following give you a hint?
Code:
#!/bin/bash
exec 3< <( ls ~ )
for i in 1 2
do
    echo "Now I'm executing the loop."
    ( while read; do echo "Process $i received $REPLY."; sleep 1; done ) <&3 &
done
echo "And now we're finished."
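For the original image job there's also a shortcut that works from tcsh just as well, since it's only a pipeline: GNU xargs can run a fixed number of commands in parallel with -P, and it doesn't return until all of them have finished, so the QuickTime step can follow straight after. A sketch with stand-in files (cp stands in for the real .ppm conversion; -P is a GNU/BSD extension, and plain ls breaks on filenames containing spaces):

```shell
#!/bin/bash
# Three empty files stand in for the directory of images.
mkdir -p imgs
: > imgs/1.jpg; : > imgs/2.jpg; : > imgs/3.jpg

# Run at most two "conversions" at a time; xargs substitutes each
# filename for {} and exits only when every command has finished.
ls imgs/*.jpg | xargs -P 2 -I {} cp {} {}.ppm

ls imgs    # now holds a .ppm next to each .jpg
```

For awkward filenames, `find ... -print0 | xargs -0 -P 2 ...` is the safe variant.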