LinuxQuestions.org
Linux - Newbie (https://www.linuxquestions.org/questions/linux-newbie-8/)
Avoid script running multiple times by file lock (https://www.linuxquestions.org/questions/linux-newbie-8/avoid-script-running-multiple-times-by-file-lock-763606/)

edenCC 10-21-2009 11:51 PM

Avoid script running multiple times by file lock
 
Sometimes we need only a single instance of a script to run at a time. That is, the script should detect whether another instance of itself is already running and act accordingly.

When multiple instances of one script run at once, problems arise easily. I once saw about 350 instances of a status-checking script sitting there doing nothing, yet eating lots of system resources.

I wrote a short blog post on my website that shows some implementation examples in Perl, Python, and shell.

It's simple to implement in Bash:

Code:

# Exit if the lock file already exists
[ -f "${0}.lock" ] && exit 1
# Create the lock file (procmail's lockfile utility)
lockfile "${0}.lock"
sleep 40
# Release the lock file manually
rm -f "${0}.lock"


An updated version:

Code:

# Try to create the lock file; -r0 means fail immediately instead of retrying
lockfile -r0 "${0}.lock" && {
    # Your code goes here!
    # Release the lock file manually, only if we actually acquired it
    rm -f "${0}.lock"
}
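
An alternative worth sketching alongside the lockfile approach: flock(1) from util-linux (assumed to be installed; the /tmp path is illustrative). The kernel releases the lock automatically when the process exits, so a crash cannot leave a stale lock behind:

```shell
#!/bin/bash
# Sketch: single-instance guard using flock(1) from util-linux (assumed
# installed). The lock is tied to the open file descriptor, so it vanishes
# automatically when the process exits, even after a crash.
LOCKFILE="/tmp/$(basename "$0").lock"   # illustrative path

exec 9>"$LOCKFILE"        # open (or create) the lock file on fd 9
if ! flock -n 9; then     # -n: fail instead of blocking if already locked
    echo "another instance is running" >&2
    exit 1
fi
echo "lock acquired"
# ... your code here; the lock is released when fd 9 closes on exit
```
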


Jerre Cope 10-22-2009 09:58 PM

ps -ef | grep -q "myscriptname" | grep -v grep && exit

edenCC 10-22-2009 10:08 PM

Quote:

Originally Posted by Jerre Cope (Post 3729311)
ps -ef | grep -q "myscriptname" | grep -v grep && exit

It's not cost-effective: it spawns several processes and uses more resources than a simple file check.

edenCC 10-22-2009 10:10 PM

One of my friends gave me another solution like this:


Code:

lock_on()
{
    local f=$1
    local freefd=6  ## do not use fd 5

    ## make sure the file is there
    mkdir -p "$( dirname $f )"
    touch "$f"

    ## find a free fd
    while (( freefd <= 9 )); do
        [[ -L /dev/fd/$freefd ]] || break
        (( freefd++ ))
    done

    (( freefd == 10 )) && return 1

    ## open the lock file
    eval "exec $freefd< \"$f\""
}

is_locked()
{
    local f=$1

    fuser "$f" &> /dev/null
}

lock="/tmp/.$( basename $0 ).lock"
is_locked "$lock" && exit 1
lock_on "$lock"
## do something here


chrism01 10-23-2009 01:21 AM

Slight amendment to post #2:

-q = quiet, but also "Exit immediately with zero status if any match is found, even if an error was detected."
http://linux.die.net/man/1/grep
Personally I'd want to know if an error occurred.
The '&& exit' is also redundant.

Otherwise, yes, use ps. The problem with lockfiles is that if a program crashes, the lockfile is not removed(!).
Check for the program actually running, imho.
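
A sketch of that idea: store the PID in the lock file and validate it with kill -0 on startup, so a lock left behind by a crash is detected as stale and reclaimed (path and names here are illustrative; note that kill -0 can be fooled by PID reuse, and the check-then-write is not atomic):

```shell
#!/bin/bash
# Sketch: a PID-file guard that survives crashes. If the recorded PID is no
# longer alive, the lock is treated as stale and reclaimed.
PIDFILE="/tmp/$(basename "$0").pid"   # illustrative path

if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "already running as PID $(cat "$PIDFILE")" >&2
    exit 1
fi
echo $$ > "$PIDFILE"                  # record our own PID
trap 'rm -f "$PIDFILE"' EXIT          # clean up on normal exit
echo "running as PID $$"
# ... your code here
```
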

Disillusionist 10-23-2009 02:52 AM

Another thing to consider is:

If the script that you are checking for is the script you are running, compare each matching process ID against $$ (the current script's own PID) so that the script does not match itself.
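
A sketch of that comparison (helper name and paths are hypothetical): the filter reads PID/PPID/command lines and prints only PIDs that match the script's name but are neither the current process ($$) nor one of its own subshells, which would otherwise show up carrying the same command line:

```shell
#!/bin/bash
# Sketch of the $$ comparison above. other_instances reads "PID PPID ARGS"
# lines on stdin and prints PIDs whose command line matches the given name
# but which are neither the current process nor its direct children.
other_instances() {
    local name=$1 self=$2
    NAME="$name" SELF="$self" awk '
        index($0, ENVIRON["NAME"]) && $1 != ENVIRON["SELF"] && $2 != ENVIRON["SELF"] { print $1 }'
}

# Typical use against the live process table:
if [ -n "$(ps -eo pid=,ppid=,args= | other_instances "$(basename "$0")" $$)" ]; then
    echo "another instance appears to be running" >&2
fi
```
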

catkin 10-23-2009 06:21 AM

Quote:

Originally Posted by edenCC (Post 3729320)
One of my friends gave me another solution

It's not as good as your solution because of the race condition between
Code:

fuser "$f" &> /dev/null
and
Code:

eval "exec $freefd< \"$f\""
Another instance of the script could use fuser, find no lock and also open the lock file.

Co-operative locking between asynchronous processes requires an atomic operation, that is an operation which is guaranteed not to be interrupted between initiation and completion. AFAIK there are only two such filesystem-related primitives: ln and mkdir.
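
The mkdir primitive can be sketched like this (directory name is illustrative): mkdir either creates the directory atomically or fails, so two instances can never both acquire the lock.

```shell
#!/bin/bash
# Sketch: atomic locking with mkdir, per the post above. Exactly one caller
# can create the directory; every other caller gets a non-zero exit status.
LOCKDIR="/tmp/$(basename "$0").lock.d"   # illustrative name

if mkdir "$LOCKDIR" 2>/dev/null; then
    trap 'rmdir "$LOCKDIR"' EXIT   # release the lock on any exit
    echo "lock acquired"
    # ... your code here
else
    echo "another instance holds the lock" >&2
    exit 1
fi
```

Note that this shares the drawback chrism01 mentioned: a hard crash (e.g. kill -9) skips the EXIT trap and leaves the directory behind.
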

The existence of procmail's "lockfile" command saves shellscript coders a lot of work in this area! It includes a -l timeout function to set the maximum age of a lockfile -- useful if you can be sure of the maximum time the lock must be held for.

For debugging and sysadmin work it is helpful if the process that created a lock file writes its ID info into the lockfile. I don't think procmail's lockfile supports that.

