Old 06-20-2004, 10:27 PM   #1
slick_willie
Member
 
Registered: May 2003
Posts: 46

Rep: Reputation: 15
How can I check to see if a script is running?


I would like to check whether a script is already running before it starts executing its code. ps only shows the command the script is currently running, but it doesn't show the script's name. I've tried various ps options with no luck. Does anybody have a suggestion?
 
Old 06-20-2004, 10:45 PM   #2
Dark_Helmet
Senior Member
 
Registered: Jan 2003
Posts: 2,786

Rep: Reputation: 374
Why not use a simple "lock" mechanism? Something like:
Code:
#!/bin/bash

lock_file="/var/script_name/lock_file"

if [ ! -e ${lock_file} ] ; then

  touch ${lock_file}

  # Do stuff here

  rm -f ${lock_file}

fi
You could also have the script echo its PID into the lock file if you need to know it for whatever reason.
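For example, that variant could look something like this (only a sketch building on the block above; the path is purely illustrative):
Code:
#!/bin/bash

lock_file="/var/script_name/lock_file"

if [ ! -e "${lock_file}" ] ; then

  # Store this instance's PID instead of just creating an empty file
  echo $$ > "${lock_file}"

  # Do stuff here

  rm -f "${lock_file}"

fi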
 
Old 06-20-2004, 11:06 PM   #3
slick_willie
Member
 
Registered: May 2003
Posts: 46

Original Poster
Rep: Reputation: 15
I was thinking about doing that, but if the script gets killed in the middle of its processing, the lock file wouldn't get deleted. The next time I ran the script, it would see the file and think the script is already running.
 
Old 06-20-2004, 11:30 PM   #4
Dark_Helmet
Senior Member
 
Registered: Jan 2003
Posts: 2,786

Rep: Reputation: 374
Yeah, it's possible, but is it likely to happen? And if it does happen, wouldn't you probably be aware of it and able to delete the lock file yourself?

The only other thing I could think of is to make the script give a heartbeat to the lock file. Something like this:
Code:
#!/bin/bash

lock_file="/var/script_name/lock_file"
delay_amount="10"
ok_to_continue="no"

# Check to see if the lock file exists
if [ -e ${lock_file} ] ; then

  # lock_file is present, wait to make sure it's valid (script is running)
  beginning_state=`cat ${lock_file}`
  sleep ${delay_amount}
  new_state=`cat ${lock_file}`

  # A script running would update the lock file with its state. If the state is the
  # same as before, assume the script is dead
  if [ "${beginning_state}" = "${new_state}" ] ; then
    ok_to_continue="yes"
  fi

else
  # No lock present, go right on through
  ok_to_continue="yes"
fi

if [ "${ok_to_continue}" = "yes" ] ; then

  if [ -e ${lock_file} ] ; then
    rm -f ${lock_file}
  fi

  echo startup > ${lock_file}

  # Do something here

  echo at stage 2 > ${lock_file}

  # Do something else here

  # repeat do -> echo ad nauseum

  rm -f ${lock_file}
fi
Of course, there's the chance that the script has to run some horribly long command. Your delay would have to be longer than the longest command's execution time for this to work. Otherwise it is possible for the script to (incorrectly) assume the lock file is no longer valid.

Maybe somebody else has a more elegant method. A shell script isn't exactly the best place to create semaphores.

Last edited by Dark_Helmet; 06-20-2004 at 11:31 PM.
 
Old 06-21-2004, 01:00 AM   #5
slick_willie
Member
 
Registered: May 2003
Posts: 46

Original Poster
Rep: Reputation: 15
Thanks for your input, but I don't think that would be more efficient than how it is currently set up. Basically, here's what's going on: I run a script on three of our servers. When the script loads, it prompts for either "c" to copy files or "r" to run a report. After choosing "c", files are copied and then checked for any differences; if there are none, it waits 15 minutes, then recopies and checks, and so on. When I select "r", it prints a table with information from all three servers. I want to take the prompt out, so that if the script is not running it starts copying automatically, and if it is already running it prints the report instead.
 
Old 06-21-2004, 01:54 AM   #6
jlliagre
Moderator
 
Registered: Feb 2004
Location: Outside Paris
Distribution: Solaris 11.4, Oracle Linux, Mint, Debian/WSL
Posts: 9,789

Rep: Reputation: 492
Quote:
How can I check to see if a script is running?
Just have the script create a file containing only its process id ($$).
When you start it again, have it check whether the file exists, then check whether a process with that pid is running, and abort if so.
That would be enough to prevent two instances running at the same time.
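A minimal sketch of that idea (assuming bash; the pid file path is only an example, use whatever location suits you):
Code:
#!/bin/bash

pid_file="/var/run/myscript.pid"

if [ -e "${pid_file}" ] ; then
  old_pid=$(cat "${pid_file}")
  # kill -0 sends no signal; it only tests whether we can signal that pid
  if kill -0 "${old_pid}" 2>/dev/null ; then
    echo "Already running as pid ${old_pid}, aborting." >&2
    exit 1
  fi
fi

echo $$ > "${pid_file}"

# Do the real work here

rm -f "${pid_file}"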

BTW, you say ps only shows the command the process is running and not the shell script's name. I think ps should display the running script too; can you give an example so we can understand what is wrong?
 
Old 06-21-2004, 02:04 AM   #7
Dark_Helmet
Senior Member
 
Registered: Jan 2003
Posts: 2,786

Rep: Reputation: 374
I like the pid idea myself, but since he's worried about the process getting killed during execution, then it is possible for the pid to get reused. So, a script starting up would read the pid from the lock file, be unaware the script was killed, see a process using the same pid, and assume the script is alive.

I think it's a moot discussion though. jlliagre is right about ps and the script (on my system at least). I made a simple script that just sleeps for 30 seconds. I ran it in the background, punched up "ps aux" and got this result:

Code:
not_me   13853  0.0  0.0  3812 1004 pts/2    S    01:54   0:00 /bin/bash ./simple_script.bash
not_me   13862  0.0  0.0  3504  548 pts/2    S    01:55   0:00 sleep 30
The script is the only thing issuing the sleep command. So there is an entry in ps to represent the script itself.
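That means a check by name is also possible, for example (a sketch only; it assumes pgrep is available and that simple_script.bash is the name being looked for):
Code:
#!/bin/bash

# Abort if another copy of this script is already running.
# pgrep -f matches against the full command line, as shown in the ps output above;
# grep -v "^$$$" excludes this instance's own pid from the matches.
if pgrep -f "simple_script.bash" | grep -vq "^$$\$" ; then
  echo "Another instance appears to be running." >&2
  exit 1
fi

# Do the real work here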
 
Old 06-21-2004, 09:14 AM   #8
aluser
Member
 
Registered: Mar 2004
Location: Massachusetts
Distribution: Debian
Posts: 557

Rep: Reputation: 43
You can use the built-in trap command, at least in bash, to catch most signals that would kill your script, and do a "rm -f $lockfile; exit" or something like that. You can't catch a SIGKILL, of course, but it's normal for applications not to attempt to deal with it.
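A sketch of that (the lock file path is just an example; as noted, SIGKILL cannot be trapped):
Code:
#!/bin/bash

lock_file="/tmp/myscript.lock"

# Remove the lock file on the usual terminating signals and on normal exit
trap 'rm -f "${lock_file}"; exit' HUP INT TERM
trap 'rm -f "${lock_file}"' EXIT

touch "${lock_file}"

# Do the real work here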
 
Old 06-21-2004, 10:44 AM   #9
jlliagre
Moderator
 
Registered: Feb 2004
Location: Outside Paris
Distribution: Solaris 11.4, Oracle Linux, Mint, Debian/WSL
Posts: 9,789

Rep: Reputation: 492
Assuming normal termination (exit, or any signal other than SIGKILL) is the rule, and that the lock file is removed on those normal exits (by the handler aluser describes), the risk of the pid being reused is very low.

For example, if a new process is launched on your system every second, it would take more than 8 hours before a pid is reused, with the default process table size (32k, I think).

You can increase the process table size to make this possibility even less likely.

Moreover, as it is possible to know the name of the command attached to a process id (as Dark_Helmet shows), it would be easy to detect the case where the process using a given pid is not the one we expect.
 
Old 06-21-2004, 04:15 PM   #10
slick_willie
Member
 
Registered: May 2003
Posts: 46

Original Poster
Rep: Reputation: 15
What is the best way to have the script echo its own PID?
 
Old 06-21-2004, 04:38 PM   #11
aluser
Member
 
Registered: Mar 2004
Location: Massachusetts
Distribution: Debian
Posts: 557

Rep: Reputation: 43
Code:
echo $$
 
Old 06-21-2004, 05:14 PM   #12
Hko
Senior Member
 
Registered: Aug 2002
Location: Groningen, The Netherlands
Distribution: Debian
Posts: 2,536

Rep: Reputation: 111
Quote:
I like the pid idea myself, but since he's worried about the process getting killed during execution, then it is possible for the pid to get reused. So, a script starting up would read the pid from the lock file, be unaware the script was killed, see a process using the same pid, and assume the script is alive
True, but I think this can be prevented by using ps (or /proc/<pid>/cmdline directly) to check whether the PID found in the "lock" file is actually running the script.
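For instance, something like this (a sketch only; the pid file path and script name are illustrative):
Code:
#!/bin/bash

pid_file="/var/run/myscript.pid"
script_name="myscript.sh"

if [ -e "${pid_file}" ] ; then
  old_pid=$(cat "${pid_file}")
  if [ -n "${old_pid}" ] && [ -d "/proc/${old_pid}" ] ; then
    # /proc/<pid>/cmdline holds NUL-separated arguments; turn the NULs into spaces
    if tr '\0' ' ' < "/proc/${old_pid}/cmdline" | grep -q "${script_name}" ; then
      echo "Another instance (pid ${old_pid}) is already running." >&2
      exit 1
    fi
  fi
fi

echo $$ > "${pid_file}"
# ... rest of the script, ending with: rm -f "${pid_file}"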
 
Old 03-15-2007, 09:49 AM   #13
rizwanrafique
Member
 
Registered: Jul 2006
Distribution: Debian, Ubuntu, openSUSE, CentOS
Posts: 147

Rep: Reputation: 19
Quote:
Originally Posted by Dark_Helmet
I like the pid idea myself, but since he's worried about the process getting killed during execution, then it is possible for the pid to get reused. So, a script starting up would read the pid from the lock file, be unaware the script was killed, see a process using the same pid, and assume the script is alive.

I think it's a moot discussion though. jlliagre is right about ps and the script (on my system at least). I made a simple script that just sleeps for 30 seconds. I ran it in the background, punched up "ps aux" and got this result:

Code:
not_me   13853  0.0  0.0  3812 1004 pts/2    S    01:54   0:00 /bin/bash ./simple_script.bash
not_me   13862  0.0  0.0  3504  548 pts/2    S    01:55   0:00 sleep 30
The script is the only thing issuing the sleep command. So there is an entry in ps to represent the script itself.
Little late reply...but well!

Why not use ps to see if the reused pid belongs to the same process? Or, as someone else mentioned, /proc/.../cmdline.
 
Old 03-16-2007, 10:18 AM   #14
kshkid
Member
 
Registered: Dec 2005
Distribution: RHEL3, FC3
Posts: 383

Rep: Reputation: 30
A few things should be noted when creating a lock file to prevent multiple instances of a process from spawning.

When creating a lock file to stop a new instance from starting while the previous one is still running, it is better to avoid a generic name for the lock file, and to write more than just the process id into it.

First, if a generic name is used, there is a high probability that another script follows the same naming convention (the same lock file name) and uses that file for its own purposes.

Second, if the process id alone is used as the value in the lock file, then on a busy system there is a real possibility that a process 'A' running with pid1 finishes its work and the system later grants a new process 'B' the same pid1, so the lock check ends up referring to a process it shouldn't.

Hence it is better to add some more information, such as the parent process id or a timestamp, to guarantee uniqueness.

A final measure is to lock the file with its permission bits once it is written, so that processes which try to overwrite it receive an error. This is not completely secure, but it is a step ahead.
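A sketch along those lines (all names and paths are illustrative; the chmod only stops accidental overwrites, not root):
Code:
#!/bin/bash

lock_file="/var/run/myscript.lock"

if [ -e "${lock_file}" ] ; then
  echo "Lock file ${lock_file} exists; another instance may be running." >&2
  exit 1
fi

# Record pid, parent pid and a timestamp so a stale or reused pid can be recognised
echo "$$ ${PPID} $(date +%s)" > "${lock_file}"

# Make the lock file read-only so other processes get an error if they try to overwrite it
chmod 444 "${lock_file}"

# Do the real work here

rm -f "${lock_file}"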
 
  

