Quote:
Originally Posted by divyashree
Can anyone suggest me to restrict this ??
Use a subshell, and background that. After the loop, use
wait to make sure all subshells complete before the script exits.
Here is an example:
Code:
#!/bin/bash
for DIR in 1 2 3 4 ; do
    (
        # Abort if $DIR/ cannot be entered.
        cd "$DIR/" || exit $?

        LOCKDIR=".lock"
        COMPLETE=".complete"

        # Create the lock directory. This is reasonably atomic.
        if ! mkdir -m 0700 "$LOCKDIR" &>/dev/null ; then
            MSG="$DIR: Already being worked on."
            echo "$MSG" >&2
            exit 0
        fi

        # We have the lock directory. Remove it automatically when this subshell exits.
        trap "rmdir '$LOCKDIR'" EXIT

        # Exit if this directory has been completed already.
        if [ -e "$COMPLETE" ]; then
            MSG="$DIR: Already completed on $(< "$COMPLETE")." 2>/dev/null
            echo "$MSG" >&2
            exit 0
        fi

        # Redirect output and error to files.
        exec > standard.out
        exec 2> standard.err

        #
        # Work starts here. To abort, use exit.
        #

        # Run ./firstscript.sh, abort if it fails.
        "./firstscript.sh" || exit $?

        # Run ./secondscript.sh, abort if it fails.
        "./secondscript.sh" || exit $?

        #
        # All work done.
        #

        # Mark the directory complete using the current date.
        date '+%Y-%m-%d %T %z' > "$COMPLETE"
    ) &
done
wait
done
wait
The above uses directory locks, since they're reasonably reliable on all systems. There are better alternatives, but they have stricter requirements, too; you didn't supply enough details for me to suggest anything better.
Directory locking is atomic on local filesystems, and usually on NFS. It is not atomic on NFS if there is packet loss or the server restarts during the operation. The flock command from the util-linux package uses an interface which fails for mixed local and remote lockers, and is of course Linux-specific, so I didn't want to suggest it.
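For reference, this is roughly what flock-based locking would look like if the Linux-only restriction is acceptable; the lock file name .lockfile and descriptor number 9 are choices for illustration, not requirements:

```shell
#!/bin/bash
# Sketch of flock(1)-based locking (util-linux, Linux-specific).
# The lock file name ".lockfile" is an assumption for illustration.

exec 9> .lockfile || exit 1    # open the lock file on descriptor 9
if ! flock -n 9 ; then         # try an exclusive lock without waiting
    echo "Already being worked on." >&2
    exit 0
fi

# ... work goes here; the kernel releases the lock when descriptor 9
# is closed, i.e. when this shell exits, even if it is killed.
```

The nice property here is that release is automatic: there is no stale lock to clean up after a crash, unlike with lock files or lock directories.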
Hard links (using ln and stat -c %h), with a unique name composed of hostname and PID (.lock.$(hostname -s).$$) and a check of the link count of a permanently existing lock file, work for both local and remote filesystems, but require POSIX hard link semantics. They will therefore not work on ISO-9660 (CD-ROMs), FAT (MS-DOS, USB sticks), or NTFS (Windows) filesystems, for example.
Trivial lock files, using test-then-create-with-touch, are very prone to problems. For example, another process might create the file between the check and the touch. Do not rely on them.
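To show exactly where the race sits, here is the broken pattern spelled out; the name lockfile is illustrative:

```shell
#!/bin/bash
# The broken test-then-touch pattern (do NOT use this):
if [ ! -e lockfile ]; then
    # <-- race window: another process may create "lockfile" right
    #     here, after our test but before our touch
    touch lockfile
    # ... both processes may now believe they hold the lock ...
fi
```

The test and the creation are two separate system calls, so nothing stops a second process from slipping in between them; mkdir, flock, and hard-link counting all avoid this by making the acquisition a single atomic operation.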
When outputting a message to the common standard error, I like to first compose the error message, so that echo is more likely to output it in one chunk and not mix messages from different sources. It's not perfect, but good enough for me.
The actual tasks done in the subshell have their standard output and standard error redirected to files in the work directory. This is a good idea, because it avoids mixing messages from different sources. You could direct both to the same file (using exec > log 2>&1) if you want.
Instead of running the scripts ./firstscript.sh and ./secondscript.sh in the job directory, you could do any other tasks. Just remember to exit to abort the task; a subsequent run will then retry it from the beginning. A completed job is marked with a file (containing a timestamp), so the loop will not try to work on those again. Note, however, that the lock is acquired before checking the completion file, to avoid race conditions. (Always lock first!)