I am writing a script that will be run by cron every 5 minutes to catch FTP activity on my server to mirror certain files to another server. The problem is, sometimes the script can run for 20-30 minutes if there are a lot of files.
But if the script is already running, I don't want cron to start another one until the script is finished.
So I am putting this snippet at the top of my code:
Code:
# bail out if another instance of this script is already running
argument=`ps -C ${0##*/} -o args= | wc -l`
if [ "$argument" -gt 1 ]; then
    exit 1
fi
Because the script itself is running, ps will always find at least one matching process. That's fine. The problem is that when I started testing the snippet, the count always came back 1 higher than I expected.
If only one copy of the script was running, $argument would equal '2'. If two were running, it would equal '3', etc.
So I ran just...
Code:
echo `ps -C ${0##*/} -o args=`
...and it came back showing only one line:
/bin/bash ./script.sh
instead of
/bin/bash ./script.sh
/bin/bash ./script.sh
Then I just ran it without the backticks and it came back correct.
Code:
ps -C ${0##*/} -o args= | wc -l
If the backticks add another process, then why doesn't it show up? The listing only spits out one line. So I'm really confused why simply assigning the output to a variable with backticks adds 1 to the count, and how I can prevent it from happening. Thanks
I'm not on a *nix box right now so I can't test this, but could the extra process be related to cron? Run the command manually at a time when the script is scheduled to run via cron and see how many processes appear.
It's not running from cron yet. To be able to test multiple instances, I put a 'sleep 60' in the code so each run stays alive for a minute.
It always seems to add an extra line no matter which process name I search for. It just has me stumped. Obviously, I could just add 1 in the code, but that's cheating and stupid. I want to know why it's not working as logically laid out.
For example:
Code:
echo `ps -C bash -o args=`
will result in something like:
/bin/bash
Where
Code:
echo `ps -C bash -o args= | wc -l`
will, on the same computer, result in:
2
I can even redirect the output into a text file with >, then run the count against the text file, and it has the correct answer (1). But when I run it from the shell, it always adds one more line than expected.
The classic solution to this potential overrun problem is not to use cron: write a daemon instead and just sleep for 5 minutes at the bottom of the loop.
In other words, in pseudo-code:
Code:
while true
do
    check_ftp_and_do_stuff
    sleep 300
done
Much simpler than process checking etc.
For the paranoid, you could put a watchdog in cron to restart the daemon if it dies; then you just check for the name of the program:
Code:
ps -ef | grep prog_name
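For illustration only, a hypothetical watchdog crontab entry; the script name and install path (ftpmirror.sh, /usr/local/bin) are invented, and the [f] in the grep pattern keeps grep from matching its own command line:
Code:
# check every 5 minutes; restart the daemon if it is not running
*/5 * * * * ps -ef | grep -q '[f]tpmirror\.sh' || /usr/local/bin/ftpmirror.sh >/dev/null 2>&1 &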
I really like the idea of running a daemon, but that seems like a lot more coding than just the 4 lines I had. ...Though I will start looking into it now.
Also, to answer another comment: I don't run rsync because I'm not just syncing directories. I am monitoring files, processing them, and moving the results to another server if they meet certain criteria that aren't as simple as file name, size, etc.
I have also discovered in testing that the wc -l off-by-one only affects processes that run under the bash binary.
In my opinion, writing a daemon to do the work is not so simple, because you then have to write a second cron job to check whether your daemon is still running ;-)
The reason for your ps problem could be that ps sometimes lists a fork of your script, in which case two instances really are running. Try to find out by printing the pid and ppid of the processes it finds. I don't know for sure if I'm right ;-)
Another way to check for running instances is a lock file (this method is often used): at the top of your script, check whether a defined lock file exists and whether the pid stored in it still belongs to a live process. If not, write your own pid into the lock file, and remove the file at the end of your script. Add a trap to ensure the lock file is also removed if your script receives a signal.
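A minimal sketch of that lock-file approach, assuming a bash script; the lock path /var/lock/ftpmirror.pid is invented for the example:
Code:
#!/bin/bash
LOCKFILE=/var/lock/ftpmirror.pid

# if the lock file exists and the pid stored in it is still alive, bail out
if [ -f "$LOCKFILE" ] && kill -0 "$(cat "$LOCKFILE")" 2>/dev/null; then
    exit 1
fi

echo $$ > "$LOCKFILE"
trap 'rm -f "$LOCKFILE"' EXIT    # clean up on any exit
trap 'exit 1' INT TERM           # route signals through the EXIT trap

# ... do the mirroring work here ...
There is still a tiny window between the check and the write where two instances could race, but for a job launched every 5 minutes that is usually acceptable.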
To answer your original question: both the deprecated `...` form and the $(...) form of bash command substitution execute the enclosed command in a new sub-shell. Thus you have created a second shell process with your `...` command, and the count you get includes that second process.
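To see this concretely, here is a small test script you can save and run under any name:
Code:
#!/bin/bash
ps -C "${0##*/}" -o args=               # run directly: prints one matching line
n=`ps -C "${0##*/}" -o args= | wc -l`   # inside backticks: the command-substitution
echo "$n"                               # sub-shell also matches, so this prints 2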
Wouldn't it be easier just to create a semaphore file in /tmp (or, as is customary, in /var/lock) that your script could test for before continuing?
Of course, to be safe you should trap any script errors and remove the semaphore file before aborting the script. (The advantage of using /var/lock is that that directory is usually emptied when the system boots, so old lock files aren't preserved across a reboot.)
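If your system ships flock(1) from util-linux, that sidesteps stale locks entirely, because the kernel releases the lock when the process exits. A sketch, with an arbitrary descriptor number and an invented lock path:
Code:
#!/bin/bash
exec 9>/var/lock/ftpmirror.lock   # open (creating if needed) the lock file on fd 9
if ! flock -n 9; then
    exit 1                        # another instance already holds the lock
fi
# ... mirroring work; the lock is released automatically when the script exits ...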
If PTrenholme hadn't said it, I was going to. I'd suggest you use his solution.
Personally, I am interested in trying chrism01's solution as I've never written a daemon process before. What is involved in actually making a script a daemon process?
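For what it's worth, a common minimal recipe is to have the script relaunch itself in a new session with setsid (from util-linux) and detach its standard streams. A sketch built around chrism01's loop, with the script name and the 'child' argument invented for the example:
Code:
#!/bin/bash
# ftpmirror.sh - sketch of a simple shell "daemon"
if [ "$1" != "child" ]; then
    # relaunch ourselves in a new session, detached from the terminal
    setsid "$0" child </dev/null >/dev/null 2>&1 &
    exit 0
fi

cd /            # don't keep the starting directory busy
while true
do
    check_ftp_and_do_stuff   # placeholder from chrism01's pseudo-code
    sleep 300
done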
Thank you (and jan61) for the lock-file suggestion. I went this route because it was more reliable anyway.
I did catch the explanation about the sub-shell, though it still confuses me a bit.