LinuxQuestions.org
Go Back   LinuxQuestions.org > Forums > Non-*NIX Forums > Programming
Old 06-21-2006, 08:53 PM   #1
BrianK
Senior Member
 
Registered: Mar 2002
Location: Los Angeles, CA
Distribution: Debian, Ubuntu
Posts: 1,334

Rep: Reputation: 51
can you do threading or multiple processes in a shell script?


I've got a shell script that converts a directory of images to .ppm file format & then makes a quicktime out of them. The conversion process sometimes takes a long time (sometimes there's a resize or a composite done at the same time). Being that I have a multi-proc system with an smp kernel, is there a way I can convert two images at once using both processors... some sort of fork or something, but in a shell script (preferably tcsh)?

I thought about just running one process in the background, but there's no way of knowing when the background process gets done. I could see lots of background processes piling up & grinding the machine to a halt.
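(One simple way to bound the number of simultaneous background jobs, sketched in bash since the answers below use bash as well: start two conversions, then `wait` until both have exited before starting the next pair. The file list and the `echo` are placeholders for the real conversion command.)

```shell
#!/bin/bash
# Sketch: process files two at a time, waiting for each pair to
# finish before starting the next. Substitute the real converter
# for the echo; the .tif names are stand-ins.
set -- img1.tif img2.tif img3.tif img4.tif img5.tif
while [ $# -gt 0 ]; do
  echo "converting $1" &                     # first job of the pair
  [ -n "${2-}" ] && echo "converting $2" &   # second job, if any
  wait                                       # block until both have exited
  shift
  [ $# -gt 0 ] && shift
done
```

The drawback is that each pair runs only as fast as its slower member; the worker-pool replies below avoid that.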
 
Old 06-22-2006, 03:31 AM   #2
spirit receiver
Member
 
Registered: May 2006
Location: Frankfurt, Germany
Distribution: SUSE 10.2
Posts: 424

Rep: Reputation: 33
I wonder if the following will do the trick: create a FIFO, send the filenames to be processed into it, and let the two worker processes retrieve the filenames from the pipe. Try this Bash script to see how it works:
Code:
#! /bin/bash

( mkfifo fifo && ls ~ > fifo && rm fifo )&

for i in 1 2
do
  (while read; do echo "Process $i received $REPLY."; sleep 1; done) < fifo &
done
The question is whether the load will be distributed among the processors.
 
Old 06-22-2006, 03:36 AM   #3
spirit receiver
Member
 
Registered: May 2006
Location: Frankfurt, Germany
Distribution: SUSE 10.2
Posts: 424

Rep: Reputation: 33
Actually, you don't need a named pipe; the following does just as well:
Code:
#! /bin/bash

exec 3< <( ls ~ )

for i in 1 2
do
  (while read; do echo "Process $i received $REPLY."; sleep 1; done) <&3 &
done
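(Adapting that worker-pool idea to the conversion task, as a sketch: two workers share one stream of file names on descriptor 3, so at most two conversions run at any time, and the final `wait` blocks until both are done. The `*.tif` glob and the echoed `convert`-style command line are placeholders for whatever tool is actually used.)

```shell
#!/bin/bash
# Two workers pull file names from a shared stream via fd 3,
# so at most two conversions run at once.
exec 3< <(printf '%s\n' *.tif)    # one file name per line
for i in 1 2; do
  (while read -r f; do
     echo "worker $i: convert $f -> ${f%.tif}.ppm"   # placeholder command
   done) <&3 &
done
wait                              # returns once both workers exit
```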
 
Old 06-22-2006, 08:21 AM   #4
bigearsbilly
Senior Member
 
Registered: Mar 2004
Location: england
Distribution: Mint, Armbian, NetBSD, Puppy, Raspbian
Posts: 3,515

Rep: Reputation: 239
neat spirit, very neat.
 
Old 06-22-2006, 12:01 PM   #5
BrianK
Senior Member
 
Registered: Mar 2002
Location: Los Angeles, CA
Distribution: Debian, Ubuntu
Posts: 1,334

Original Poster
Rep: Reputation: 51
Hey!! that's pretty cool. Not sure if I understand it yet, but...

Thanks!

Last edited by BrianK; 06-22-2006 at 12:19 PM.
 
Old 08-07-2006, 02:25 PM   #6
BrianK
Senior Member
 
Registered: Mar 2002
Location: Los Angeles, CA
Distribution: Debian, Ubuntu
Posts: 1,334

Original Poster
Rep: Reputation: 51
reviving an ol' thread...

I'm finally getting a chance to get back on this.

Maybe someone can help me read spirit receiver's latter example... It appears that file descriptor 3 gets the output of "ls ~", and then each read command pulls a single line from it and processes it. Is that about right?
I don't see any thread/process management here... in other words, if there were no sleep command and a single command took maybe 2-3 seconds to execute, wouldn't this spin off so many processes it would eventually kill the machine?

Last edited by BrianK; 08-07-2006 at 02:29 PM.
 
Old 08-07-2006, 03:03 PM   #7
spirit receiver
Member
 
Registered: May 2006
Location: Frankfurt, Germany
Distribution: SUSE 10.2
Posts: 424

Rep: Reputation: 33
You're right about that file descriptor.

But then, the loop spawns exactly two processes. You could use
Code:
for i in $(seq 1 10)
to spawn 10 processes instead, but the number is fixed and doesn't depend on the loop's content.
Each process stays alive until the input on the file descriptor is exhausted, and it reads a single line every second; that's what the "sleep 1" is for.
 
Old 08-07-2006, 03:57 PM   #8
BrianK
Senior Member
 
Registered: Mar 2002
Location: Los Angeles, CA
Distribution: Debian, Ubuntu
Posts: 1,334

Original Poster
Rep: Reputation: 51
Quote:
Originally Posted by spirit receiver
You're right about that file descriptor.

But then, the loop spawns exactly two processes. You could use
Code:
for i in $(seq 1 10)
to spawn 10 processes instead, but the number is fixed and doesn't depend on the loop's content.
Each process stays alive until the input on the file descriptor is exhausted, and it reads a single line every second; that's what the "sleep 1" is for.
... and therein lies the question... If there were no sleep statement, this would never stop executing two processes at a time, correct? In other words, is there any way to guarantee that both processes have finished before we move on to the next iteration of the loop?

The reason I ask is that in the original problem, I stated that I'm converting images & want to do more than one at a time. If a sleep is required, then conversions where each file takes less than half a second would perform worse with this method, and conversions where each file takes more than a second would pile up extra processes with every iteration, to the point where the machine would be swapped beyond recovery after many files are processed.

Am I correct in saying this?

Last edited by BrianK; 08-07-2006 at 03:59 PM.
 
Old 08-07-2006, 04:40 PM   #9
spirit receiver
Member
 
Registered: May 2006
Location: Frankfurt, Germany
Distribution: SUSE 10.2
Posts: 424

Rep: Reputation: 33
The loop is only executed twice, immediately.

In each iteration, a single process is launched in the background. That's what the following does.
Code:
( ... ) &
So there are two background processes. After these are launched, the script exits.
Each of these two processes is supposed to get its standard input from file handle 3, so we use the following instead:
Code:
( ... ) <&3 &
And what each process does is this:
Code:
while read; do echo "Process $i received $REPLY."; sleep 1; done
If you remove "sleep 1", you'll see that each process will be much quicker at picking lines from the file handle; it might well be that one of them won't even get a chance to retrieve anything before the handle is exhausted. But this won't change the number of processes.

Does the following give you a hint?
Code:
#! /bin/bash

exec 3< <( ls ~ )

for i in 1 2
do
  echo "Now I'm executing the loop."
  (while read; do echo "Process $i received $REPLY."; sleep 1; done) <&3 &
done
echo "And now we're finished."
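(And for the "how do I know when they're done" part of the question: the parent script can call `wait` after the loop, which blocks until every background job it started has exited, so the QuickTime step can safely go after it. A sketch of the same script, with a fixed printf list standing in for "ls ~" so the output is predictable:)

```shell
#!/bin/bash
# Same two-worker pattern, plus a final wait so the script only
# continues once both workers have drained the stream.
exec 3< <(printf '%s\n' one two three)   # stand-in for ls ~

for i in 1 2
do
  (while read; do echo "Process $i received $REPLY."; done) <&3 &
done
wait                      # blocks until both background processes exit
echo "All done."          # safe to build the movie here
```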
 
  

