[SOLVED] How does '&' make a process run in the background?
Hmmm, I have never worked with job control options. I know how to use $? to check the exit status of a just-run command. But what I had been looking for a while back was a way to background a job (in a script, mind you), go on and do other things, and then go back and check the status of the job backgrounded earlier.
Specifically, my routine runs 'make -n install-rule' to check whether install-rule is a valid Makefile rule. If it is not, the command exits very quickly. But if it is a valid Makefile rule, the command may exit quickly or could run for a pretty long time. I'd like to be able to stop the command if it doesn't fail quickly.
What I came up with was this:
Code:
$MAKE_COMMAND -n "$INSTALL_RULE" > "$LOG_DIR"/$NAME-test-make-install.log 2> /dev/null &
# so we can get the process pid
PID=$!
# sleep a trivial amount of time
sleep 1
# if the directory /proc/$PID exists, then the process is still running
if [[ -d /proc/$PID ]] ; then
    # we are in a 'race' condition anyway, so silence any error output
    # interrupt the process first to avoid nasty error output
    (kill -INT $PID) &> /dev/null
    # (kill -KILL $PID) &> /dev/null
    STATUS=SUCCESS
else
    # otherwise, the process either failed or succeeded very quickly
    # so run it again and check the exit status directly
    $MAKE_COMMAND -n "$INSTALL_RULE" &> "$LOG_DIR"/$NAME-test-make-install.log
    ERR=$?
    case $ERR in
        0) STATUS=SUCCESS ;;
        *) STATUS=FAILED ;;
    esac
fi
But I always wondered if there was a way to do it with job control. Can you even use job control in a script?
This is kind of related: I have wondered, is there a way to retrieve the exit status of a process which was backgrounded?
There sure is. Just before the shell gives you a prompt (for any reason, for example backgrounding a job that has been running in the foreground), it reports the exit status for any background process that has finished.
So that's how you get it, but it's only available once. Once the shell has reported the status, that status is gone.
I wonder if there is any way to capture it from a running script that launched the background job. Perhaps the script command ... ?
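For the script case, here is a minimal sketch (my addition, not from the thread) using the standard `wait` builtin, which blocks until the given process exits and returns that process's exit status:

```shell
#!/bin/bash
# Background a command, save its PID via $!, do other work,
# then collect its exit status with the 'wait' builtin.
false &          # stand-in for a long-running command that fails
PID=$!
echo "doing other work while the job runs"
wait "$PID"      # blocks until the job exits
echo "background job exited with status $?"   # prints: ... status 1
```

Unlike the interactive one-shot report, `wait` makes the status available exactly where the script needs it.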
Regarding capturing the exit status of background tasks, this is one approach using $? and temp files:
Code:
#!/bin/bash
# Use subshells, $?, and temp files to see return status of background jobs.
# remove stale temp files from a previous run
rm -f tmpfile1 tmpfile2
(invalidcmd ; echo $? > tmpfile1) &> /dev/null &
(whoami ; echo $? > tmpfile2) &> /dev/null &
echo "Doing other stuff now."
sleep 1
echo "The return value of the first command was `cat tmpfile1`."
echo "The return value of the second command was `cat tmpfile2`."
Just remember that the child processes will still have to finish before you can check their exit status! If it takes longer for the child process to finish than it does for your code to get to the "checking" step, there will be problems because the temp files do not exist yet:
Code:
#!/bin/bash
# Use subshells, $?, and temp files to see return status of background jobs -- and watch it break.
# These jobs take too long to finish:
(sleep 2 ; doodles &> /dev/null ; echo $? > tmpfile1) > /dev/null &
(sleep 2 ; whoami &> /dev/null ; echo $? > tmpfile2) > /dev/null &
echo "Doing other stuff now."
# One error for each 'cat' and 'rm', and blank values:
echo "The return value of the first command was `cat tmpfile1`."
echo "The return value of the second command was `cat tmpfile2`."
rm tmpfile1
rm tmpfile2
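One way to close that race (my addition, not from the thread) is simply to `wait` for the background jobs before reading the temp files; with no arguments, `wait` blocks until every background job has exited:

```shell
#!/bin/bash
# Same temp-file idea, but 'wait' with no arguments blocks until
# all background jobs have finished, so the files must exist.
(sleep 2 ; whoami &> /dev/null ; echo $? > tmpfile2) > /dev/null &
echo "Doing other stuff now."
wait   # no arguments: wait for every background job
echo "The return value of the command was `cat tmpfile2`."
rm tmpfile2
```

The trade-off is that the main script stops at the `wait`, so "other stuff" must be done before that point.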
If you don't need to have these variables available in the main part of your program, you can simplify this by just writing your command(s) into the subshell. In the example below, I chose to call a function.
This also prevents the possibility of arriving at your "checking" code before the subshell is ready.
Code:
#!/bin/bash
# Cleaner code calling a function as the last command in the subshell
cmdfinished() {
    # Takes 2 arguments, "command name" and "return value"
    echo "The return value of command $1 was $2."
}
(sleep 1 ; invalidcmd &> /dev/null ; cmdfinished invalidcmd $?) &
(sleep 1 ; whoami &> /dev/null ; cmdfinished whoami $?) &
echo "Doing other stuff now without worrying about when the results come in."
You could probably also approach it by trapping signals and using 'kill'.
I'd think wait would tell you, since its C counterpart is the standard way to obtain that info.
Kevin Barry
edit: I had a chance to check it, and the wait built-in is the way to do it provided you have a pid or job number. You might try this to get the exit status of all jobs:
Code:
while read job; do wait $job; echo $job $?; done < <( jobs -p )
The thing is, though, that I want to kill the process if it does not fail quickly. The routine tests the 'make -n install' command, which sometimes takes a long time to run; if it fails, it fails quickly. Still, thanks for some tips on how to approach similar situations.
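For what it's worth, GNU coreutils' `timeout` command implements exactly that kill-if-it-doesn't-fail-quickly logic. A sketch (my addition, not from the thread; `sleep 3` stands in for the make command):

```shell
#!/bin/bash
# GNU coreutils 'timeout' kills the command when the limit
# expires and exits 124 in that case; otherwise it passes the
# command's own exit status through.
CMD="sleep 3"          # stand-in for 'make -n install-rule'
if timeout 1 $CMD; then
    STATUS=SUCCESS     # exited 0 within the limit
elif [ $? -eq 124 ]; then
    STATUS=SUCCESS     # still running after 1s: a valid, long-running rule
else
    STATUS=FAILED      # failed quickly
fi
echo "STATUS: $STATUS"
```

This avoids both the manual backgrounding and the /proc/$PID check.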
I think the following does what you want, although to suppress the process-killed error message, the redirect needs to go before the last echo, for reasons I don't understand (see the comments in the code).
Code:
#!/bin/bash
set -m # enable job control
TIME=${1:-10}
EXIT_STATUS=${2:-0}
SCRIPT_PID=$$
STATUS=FAILED
FINISHED=NO
trap 'wait $PID && STATUS=SUCCESS; FINISHED=YES' SIGUSR1
# sleep $TIME stands in for make, for easier testing
(sleep $TIME; kill -s SIGUSR1 $SCRIPT_PID; exit $EXIT_STATUS) &
PID=$!
# sleep a trivial amount of time
sleep 1
if [ "$FINISHED" = NO ] ; then
    # we are in a 'race' condition anyway, so silence any error output
    # interrupt the process first to avoid nasty error output
    # except silencing here doesn't work now
    kill -KILL $PID
    STATUS=SUCCESS
fi
# have to redirect before this echo otherwise it doesn't work!?
exec 2>/dev/null
echo "STATUS: $STATUS; FINISHED: $FINISHED"
Example usage:
Code:
~/tmp$ TIMEFORMAT='real %R'
~/tmp$ time ./jobs.sh 0 0 # finish quickly with successful exit
STATUS: SUCCESS; FINISHED: YES
real 1.026
~/tmp$ time ./jobs.sh 0 1 # finish quickly with failure on exit
STATUS: FAILED; FINISHED: YES
real 1.024
~/tmp$ time ./jobs.sh 4 0 # take a while to finish
STATUS: SUCCESS; FINISHED: NO
real 1.024
Last edited by ntubski; 12-22-2009 at 10:14 AM.
Reason: typos
Well, a big thanks for that! I'll give that a shot and see if it does what I'm after, even though the code I have is working. It just seems inelegant to me, if for no other reason than the arbitrary sleep time, which might not work for other commands or on a faster or slower machine. Thanks again.