Just notes of little "how-to's", so that when I need to do something I've already done, I can find how I did it, in case I don't remember anymore (which is not unlikely). Hopefully they can be useful to others, but I can't guarantee that anything will work, or that it won't make things worse.
Rough script that waits until processes finish their business, then exits
Posted 06-15-2015 at 04:17 PM by the dsc
The processes are given as a single argument, so one must use quotes if there's more than one process to wait for.
Usage example:
./waitprocs "lame avconv sox tar" && self-destruct
My previous attempt with a "counting" system didn't work, so I've tried a different approach. For each given process, the script checks whether it's running: if it is, it creates an empty temporary file named after the process in a temporary work folder; if it isn't, it tries to remove that file.
An outer loop lists this work folder, and if it comes up empty three times in a row, the script exits.
Probably there's a much nicer way to do this without resorting to such crude methods as blank temp files and ls, though.
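For what it's worth, here is a sketch of one simpler alternative: polling pgrep directly and skipping the marker files entirely. This is my own illustration rather than the author's script, and the function name wait_for_procs is just a hypothetical choice; it takes the same single quoted-list argument.

```shell
#!/bin/bash
# Sketch: block until none of the named processes are running.
# Usage: wait_for_procs "lame avconv sox tar" && self-destruct
wait_for_procs() {
    local prog
    # $1 is deliberately unquoted so the list splits into names.
    for prog in $1 ; do
        # Poll until no process with this exact name is left.
        while pgrep -x "$prog" >/dev/null 2>&1 ; do
            sleep 2
        done
    done
}
```

If a later process in the list finishes while we're still waiting on an earlier one, its inner loop simply never runs, so the order of names doesn't matter.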
To use this with programs such as ImageMagick -- well, actually, with ImageMagick specifically, since I can't think of another program that does this -- you'll need to use the "hidden" process name, which isn't the name you use to issue the command itself, but something like "convert.im6".
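One way to check what name pgrep -x will actually match is to ask ps for the process's "comm" field, which is what can differ from the command you typed. As a quick illustration (not part of the original script), you can print it for any PID -- here, the current shell:

```shell
#!/bin/bash
# The name matched by `pgrep -x` is the process's "comm" field,
# which may differ from the typed command (e.g. convert.im6).
# Print the comm field of a given PID -- here, the current shell:
ps -o comm= -p "$$"
```

So, running the command you're interested in and checking its PID with `ps -o comm=` should reveal names like "convert.im6" to pass to the script.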
I'm not so sure the sleep time within the "for prog" loop is a good idea. Perhaps it would be better to have a longer sleep time in the "ls" loop and more empty runs required before exiting.
Code:
#!/bin/bash

workdir="/dev/shm/waiter$$"
mkdir "$workdir"

# SIGKILL can't be trapped, so only TERM and INT are handled here.
# Single quotes delay expansion until the trap actually fires.
trap 'kill "$checker" 2>/dev/null; rm -f "$workdir"/*.rng; rmdir "$workdir"; exit' SIGTERM SIGINT

own=$$
runs=0

# Background checker: keep a marker file around for each watched
# process while it's running, remove it once it's gone.
# $1 is deliberately unquoted so the list splits into names.
while true ; do
    for prog in $1 ; do
        if [ -n "$(pgrep -x "$prog" | grep -v "$own")" ] ; then
            touch "$workdir/$prog.rng" 2>/dev/null
        else
            rm "$workdir/$prog.rng" 2>/dev/null
        fi
    done
    sleep 1
done &
checker=$!

# Exit only once the work folder has been empty three times in a row.
while [ "$runs" -lt 3 ] ; do
    sleep 2
    if ls "$workdir"/*.rng >/dev/null 2>&1 ; then
        runs=0
    else
        runs=$((runs+1))
    fi
done

# Stop the background checker before removing the folder,
# otherwise it could recreate a marker file and rmdir would fail.
kill "$checker" 2>/dev/null
rmdir "$workdir"
exit 0
Comments
Hi, your choice of /dev/shm as the work directory is strange. The only places I consider safe to write user data are /home/$USER or /tmp. /dev is, let's call it, a reserved directory; don't put stuff there as a general rule.
Posted 06-17-2015 at 12:32 AM by rhubarbdog
I think that /dev/shm is quite handy and more appropriate for some temporary files, even more so for temporary files with no actual data, like in this case. It's both faster in itself (at least faster than a hdd), and by reducing writes to the hdd it also leaves the hdd free to work only on writes that really matter. My old hdds are unbearably slow, so even if I couldn't really measure the difference (and I never really bothered to), I'd avoid writing things that don't really need to be written, just on principle.
Or, almost on principle. There are even people who have scripts that copy the whole "~/.config/webbrowser/profile" folder to /dev/shm while it's in use, and sync it back to the hdd with rsync when they close the browser. But I don't do that, mostly because I'm also somewhat short on RAM. I think Arch Linux even has this script packaged.
But I speak from a more or less single-user (non-root) desktop perspective; for servers/real system administrators, the thing may really be very, very wrong, for some good reason I can't really imagine right now.
Posted 06-23-2015 at 09:58 PM by the dsc