I would like to check whether a script is already running before it starts executing its code. ps only shows the command the script is currently running, not the script's name. I've tried various ps options with no luck. Does anybody have a suggestion?
I was thinking about doing that, but if the script gets killed in the middle of its processing the file wouldn't get deleted, and the next time I would run the script it would see the file and think the script is already running.
Yeah, it's possible, but is it likely to happen? And if it does happen, wouldn't you probably be aware of it and able to delete the lock file yourself?
The only other thing I could think of is to make the script give a heartbeat to the lock file. Something like this:
Code:
#!/bin/bash
lock_file="/var/script_name/lock_file"
delay_amount="10"
ok_to_continue="no"

# Check to see if the lock file exists
if [ -e "${lock_file}" ]; then
    # lock_file is present; wait to make sure it's valid (script is running)
    beginning_state=$(cat "${lock_file}")
    sleep "${delay_amount}"
    new_state=$(cat "${lock_file}")
    # A running script would update the lock file with its state. If the
    # state is the same as before, assume the script is dead.
    if [ "${beginning_state}" = "${new_state}" ]; then
        ok_to_continue="yes"
    fi
else
    # No lock present, go right on through
    ok_to_continue="yes"
fi

if [ "${ok_to_continue}" = "yes" ]; then
    rm -f "${lock_file}"
    echo "startup" > "${lock_file}"
    # Do something here
    echo "at stage 2" > "${lock_file}"
    # Do something else here
    # repeat do -> echo ad nauseam
    rm -f "${lock_file}"
fi
Of course, there's the chance that the script has to run some horribly long command. Your delay would have to be longer than the longest command's execution time for this to work; otherwise, the script could incorrectly assume the lock file is no longer valid.
Maybe somebody else has a more elegant method. A shell script isn't exactly the best place to create semaphores.
Last edited by Dark_Helmet; 06-20-2004 at 11:31 PM.
Thanks for your input, but I don't think that would be more efficient than the current setup. Basically, here's what's going on: I run a script on three of our servers. When the script loads, it prompts for either "c" to copy files or "r" to run a report. After choosing "c", files are copied and then checked for differences; if there are none, it waits 15 minutes, then recopies and checks again, and so forth. When I select "r", it prints a table with information from all three servers. I want to remove the prompt, so that if the script is not running it starts copying automatically, and if it is running it prints the report.
Quote:
How can I check to see if a script is running?
Just have the script create a file containing only its process id ($$).
When you start it again, have it check whether the file exists; if it does, check whether a process with that pid is running, and abort if so.
That would be enough to prevent two instances from running at the same time.
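A minimal sketch of that approach (the lock file path is a placeholder; adjust it for your system):

```shell
#!/bin/bash
# Hypothetical lock file location.
lock_file="/tmp/myscript.pid"

if [ -e "$lock_file" ]; then
    old_pid=$(cat "$lock_file")
    # kill -0 sends no signal; it only tests whether the process exists.
    if kill -0 "$old_pid" 2>/dev/null; then
        echo "Already running as PID $old_pid; aborting." >&2
        exit 1
    fi
    # Stale lock left behind by a killed instance: remove it and continue.
    rm -f "$lock_file"
fi

echo $$ > "$lock_file"
# ... main work goes here ...
rm -f "$lock_file"
```

Note that `kill -0` only proves *some* process has that pid, not that it is your script; the pid-reuse caveat discussed below still applies.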
BTW, you say ps only shows the command the process is running and not the shell script's name. I think ps should display the running script too; can you give an example so we can see what's wrong?
I like the pid idea myself, but since he's worried about the process getting killed during execution, then it is possible for the pid to get reused. So, a script starting up would read the pid from the lock file, be unaware the script was killed, see a process using the same pid, and assume the script is alive.
I think it's a moot discussion though. jlliagre is right about ps and the script (on my system at least). I made a simple script that just sleeps for 30 seconds. I ran it in the background, punched up "ps aux" and got this result:
You can use the builtin trap command, at least on bash, to catch most signals which would kill your script, and do a "rm -f $lockfile; exit" or something like that. You can't catch a SIGKILL of course, but it's normal for applications not to attempt to deal with it.
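For example (the lock path is a placeholder), a trap can remove the lock on normal exit and on the common fatal signals:

```shell
#!/bin/bash
lock_file="/tmp/myscript.lock"   # placeholder path

echo $$ > "$lock_file"

# Remove the lock whenever the script exits, however it got there.
trap 'rm -f "$lock_file"' EXIT
# On HUP/INT/TERM, exit so the EXIT trap above fires.
trap 'exit 1' HUP INT TERM

sleep 1   # stand-in for the real work
```

SIGKILL (kill -9) cannot be trapped, so a stale lock is still possible in that one case.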
Assuming normal termination (exit or a non-SIGKILL signal) is the rule, and that the lock file is removed on these normal exits (by the handler aluser describes), the risk of the pid being reused is very low.
For example, if your system launches a new process every second, it would take more than 8 hours before a pid is reused, given the default process table size (32k, I think).
You can increase the process table size to make this possibility even lower.
Moreover, since it is possible to find the name of the command attached to a process id (as Dark_Helmet shows), it is easy to detect the case where the process using a given pid is not the one we expect.
Quote:
I like the pid idea myself, but since he's worried about the process getting killed during execution, then it is possible for the pid to get reused. So, a script starting up would read the pid from the lock file, be unaware the script was killed, see a process using the same pid, and assume the script is alive.
True, but I think this can be prevented by checking with ps (or /proc/<pid>/cmdline directly) to check if the PID found in the "lock" file is running the script.
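On Linux, that check might look like this (the lock path and script name are placeholders):

```shell
#!/bin/bash
lock_file="/tmp/myscript.pid"   # placeholder path
script_name="myscript.sh"       # placeholder name

if [ -e "$lock_file" ]; then
    pid=$(cat "$lock_file")
    # /proc/<pid>/cmdline holds the NUL-separated argument list;
    # translate the NULs to spaces so grep can search it.
    if [ -r "/proc/$pid/cmdline" ] &&
       tr '\0' ' ' < "/proc/$pid/cmdline" | grep -q "$script_name"; then
        echo "An instance is already running (PID $pid)." >&2
        exit 1
    fi
    # The pid is gone, or it now belongs to some other program: stale lock.
    rm -f "$lock_file"
fi
```

This way, a reused pid only causes a false positive if the new process happens to be running a script with the same name.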
Quote:
I like the pid idea myself, but since he's worried about the process getting killed during execution, then it is possible for the pid to get reused. So, a script starting up would read the pid from the lock file, be unaware the script was killed, see a process using the same pid, and assume the script is alive.
I think it's a moot discussion though. jlliagre is right about ps and the script (on my system at least). I made a simple script that just sleeps for 30 seconds. I ran it in the background, punched up "ps aux" and got this result:
A few things should be noted when creating a lock file to prevent multiple instances of a process from spawning while a previous instance is still running.
First, avoid generic names for the lock file, and don't just redirect the process id into it. If a common name is used, there is a good chance that another script will follow the same naming convention and use the same filename for its own purposes.
Second, if the process id alone is used as the value in the lock file, then on a busy system a process 'A' running with pid1 may finish its work, and the system can grant a new process 'B' that same pid1; we would end up guarding against a process that we shouldn't be. Hence it's better to add more information, such as the parent process id or a timestamp, to guarantee uniqueness.
Finally, you could lock the file with its permission bits once it is written, so a process that tries to overwrite it receives an error. This isn't truly secure, but it is a step ahead.
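One way to sketch those points on Linux: use a script-specific lock name, store the pid together with the process's start time (from `/proc/<pid>/stat`, so a reused pid is detected), and make the file read-only once written. All names here are illustrative:

```shell
#!/bin/bash
# Script-specific lock name avoids collisions with other scripts.
lock_file="/tmp/report_copier.lock"   # illustrative name

# Start time (in clock ticks since boot) of a pid, from /proc.
# Prints nothing if the pid no longer exists. The sed strips the
# "pid (comm) " prefix so awk's field count is reliable.
start_of() {
    sed 's/^.*) //' "/proc/$1/stat" 2>/dev/null | awk '{print $20}'
}

if [ -e "$lock_file" ]; then
    read -r old_pid old_start < "$lock_file"
    # The pid identifies our script only if its start time also matches;
    # a reused pid will have a different start time.
    if [ -n "$old_start" ] && [ "$(start_of "$old_pid")" = "$old_start" ]; then
        echo "Already running (PID $old_pid)." >&2
        exit 1
    fi
    chmod u+w "$lock_file" 2>/dev/null   # undo the read-only bits
    rm -f "$lock_file"
fi

echo "$$ $(start_of $$)" > "$lock_file"
chmod 400 "$lock_file"   # discourage accidental overwrites by other scripts
# ... main work goes here ...
chmod u+w "$lock_file" && rm -f "$lock_file"
```

This is only a sketch: it still has a small race between the existence check and the write, which a plain lock file cannot fully close.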