linux-related notes
Just annotations of little "how to's", so I can find how to do something I've already done when I need to do it again, in case I don't remember anymore (which is not unlikely). Hopefully they can be useful to others, but I can't guarantee that they will work, or that they won't make things worse.
Rough script that waits until processes finish their business, then exits

Posted 06-15-2015 at 04:17 PM by the dsc

The processes are given as a single argument, so you must quote the list if there's more than one process to wait for.

Usage example:

./waitprocs "lame avconv sox tar" && self-destruct
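Inside the script, $1 holds the whole quoted list, and the unquoted expansion in the "for" loop then splits it on whitespace — which is why the quotes matter on the command line. A minimal illustration:

```shell
# Simulate the script receiving one quoted argument as $1.
set -- "lame avconv sox tar"

# The unquoted $1 is word-split by the for loop,
# yielding one process name per iteration.
for prog in $1 ; do
    echo "$prog"
done
# prints lame, avconv, sox and tar, one per line
```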

My previous attempt with a "counting" system didn't work, so I've tried a different approach: for each given process, the script checks whether it's running; if it is, it creates a blank temporary file named after the process in a temporary work folder; if it isn't, it tries to remove that file.

An outer loop lists this work folder, and once it comes up empty three times in a row, the script exits.

There's probably a much nicer way to do this without resorting to such crude methods as blank temp files and ls, though.
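For what it's worth, one simpler variant along the same lines (a sketch of my own, not the script from this post) skips the temp files entirely and just polls pgrep until none of the names match:

```shell
# wait_for NAMES: poll every 2 seconds until none of the space-separated
# process names in $1 are still running, then return.
wait_for() {
    while : ; do
        found=0
        for prog in $1 ; do
            # -x: match the exact process name
            pgrep -x "$prog" >/dev/null 2>&1 && found=1
        done
        [ "$found" -eq 0 ] && return 0
        sleep 2
    done
}

# Usage, mirroring the post's example:
# wait_for "lame avconv sox tar" && self-destruct
```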

To use this with programs such as ImageMagick -- well, actually, with ImageMagick specifically, as I can't think of another program that does this -- you'll need to use the "hidden" process name, which isn't the name you type to issue the command itself, but something like "convert.im6".
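If you're not sure what name a program actually runs under (the exact ".im6" suffix depends on your ImageMagick build), you can check while it's running. On Linux the kernel records the process name in /proc, and that's what pgrep -x matches against:

```shell
# Start a long-lived process in the background and inspect the name
# the kernel records for it.
sleep 30 &
pid=$!

# /proc/<pid>/comm holds the "real" process name (here: sleep)
cat /proc/$pid/comm

# pgrep -l lists matching PIDs together with their names
pgrep -l -x "$(cat /proc/$pid/comm)"

kill $pid
```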

I'm not so sure the sleep time within the "for prog" loop is a good idea. Perhaps it's better to have a longer sleep on the "ls" loop and require more empty runs before exiting.



#!/bin/bash

# Work folder for the blank flag files; under /dev/shm, as discussed in
# the comments (the exact subdirectory name is filled in here, since the
# original omitted the definition).
workdir=/dev/shm/waitprocs.$$
own=$$   # this script's own PID, filtered out of pgrep's output
runs=0

mkdir $workdir

# Note: SIGKILL cannot be trapped, so only TERM and INT get cleanup.
trap "rm -f $workdir/*.rng ; rmdir $workdir ; exit" SIGTERM SIGINT

# Background loop: keep a flag file around for each watched process
# while it's running; remove it once the process is gone.
while true ; do
  for prog in $1 ; do
    [ ! -z "$(pgrep -x $prog | grep -v $own)" ] && touch $workdir/$prog.rng 2>/dev/null || rm $workdir/$prog.rng 2>/dev/null
    sleep 1
  done
done &
monitor=$!

# Outer loop: exit once the work folder is empty three checks in a row.
while [ $runs -lt 3 ] ; do
  echo $runs
  sleep 2
  ls $workdir/*.rng >/dev/null 2>&1 && runs=0 || runs=$((runs+1))
done

kill $monitor 2>/dev/null   # stop the background monitor loop
rm -f $workdir/*.rng 2>/dev/null
rmdir $workdir
exit 0
Total Comments 2


  1. Old Comment
Hi, your choice of work directory being /dev/shm... is strange. The only places I consider safe to write user data are /home/$USER or /tmp. /dev is, let's call it, a reserved directory; don't put stuff there as a general rule.
    Posted 06-17-2015 at 12:32 AM by rhubarbdog
  2. Old Comment
    I think that /dev/shm is quite handy and more appropriate for some temporary files, even more so for temporary files with no actual data, like in this case. It's both faster in itself (at least than an hdd) and, by reducing writes to the hdd, it leaves the hdd free to work only on writes that really matter. My old hdds are unbearably slow, so even if I couldn't really measure the difference (and I never really bothered to), I'd avoid writing things that don't really need to be written, just on principle.

    Or, almost that. There are a few people who actually have scripts that copy their whole "~/.config/webbrowser/profile" folder to /dev/shm while it's in use, and sync it back to the hdd with rsync when they close the browser. But I don't do that, mostly because I'm also somewhat short on RAM. I think Arch Linux even has this script packaged.

    But I speak from a more or less single-user (non-root) desktop perspective; for servers and real system administrators, the thing may really be very, very wrong, for some good reason I can't really imagine right now.
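    For what it's worth, a quick way to check that /dev/shm really is RAM-backed tmpfs rather than the hard disk:

```shell
# "tmpfs" in the Type column means /dev/shm lives in RAM, not on disk.
df -Th /dev/shm
```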
    Posted 06-23-2015 at 09:58 PM by the dsc

