LinuxQuestions.org
Programming This forum is for all programming questions.
The question does not have to be directly related to Linux and any language is fair game.

Old 02-11-2009, 02:34 PM   #1
gatorpower
LQ Newbie
 
Registered: Feb 2009
Posts: 15

Rep: Reputation: 0
Bash Programming Problem with 'wc -l'


Hello.

I am writing a script that will be run by cron every 5 minutes to catch FTP activity on my server to mirror certain files to another server. The problem is, sometimes the script can run for 20-30 minutes if there are a lot of files.

But if the script is already running, I don't want cron to start another one until the script is finished.

So I am putting this snippet at the top of my code:

Code:
argument=`ps -C ${0##*/} -o args= | wc -l`

if [ "$argument" -gt 1 ]; then
    exit 1
fi
Because the script itself is running, there will always be at least one matching process. That's fine. The problem is that when I started testing this snippet, the count always came back one higher than I expected.

If only one copy of the script was running, $argument would equal '2'. If two were running, $argument would equal '3', etc.

So I ran just...

Code:
echo `ps -C ${0##*/} -o args=`
...and it came back showing only one line:

/bin/bash ./script.sh

instead of

/bin/bash ./script.sh
/bin/bash ./script.sh

Then I just ran it without the backticks and it came back correct.

Code:
ps -C ${0##*/} -o args= | wc -l
If the backticks add another process, why doesn't it show up? The echo only prints one line. So I'm really confused about why simply assigning the output to a variable with backticks adds 1 to the count, and how I can prevent it. Thanks

Last edited by gatorpower; 02-11-2009 at 02:47 PM.
 
Old 02-11-2009, 05:04 PM   #2
Big_Vern
LQ Newbie
 
Registered: Jan 2008
Posts: 9

Rep: Reputation: 1
I'm not on a *nix box at the moment so I can't test this, but could the extra process be related to cron? Run the command manually when the script is scheduled to run via cron and see how many processes appear.
 
Old 02-11-2009, 05:22 PM   #3
gatorpower
LQ Newbie
 
Registered: Feb 2009
Posts: 15

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by Big_Vern View Post
I'm not on a *nix box at the moment so I can't test this, but could the extra process be related to cron? Run the command manually when the script is scheduled to run via cron and see how many processes appear.
It's not running from cron yet. To test multiple instances, I put a 'sleep 60' in the code so each run stays alive for a minute.

It always adds an extra line, no matter which process I search for. It has me stumped. Obviously I could just add 1 in the code, but that's cheating and stupid. I want to know why it isn't working as logically laid out.

For example:

Code:
echo `ps -C bash -o args=`
will result in something like:

/bin/bash

Where

Code:
echo `ps -C bash -o args= | wc -l`
will result, on the same computer, in:

2

I can even redirect it with > into a text file, then test the text file, and it will have the correct answer (1); but when I run it from the shell, it always adds one more line than expected.
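For what it's worth, one way to make the check come out right (a sketch, assuming the command-substitution subshell inherits the script's name in ps, which is what the output above suggests) is to list PIDs instead of args and filter out the current shell's own PID:

```shell
#!/bin/bash
# Count running copies of this script. The command substitution forks a
# subshell that also shows up in ps, but inside that subshell $$ still
# reports the parent's PID, so removing the $$ line cancels the extra
# process exactly: (N instances + 1 subshell) - 1 = N.
instances=$(ps -C "${0##*/}" -o pid= | grep -vw "$$" | wc -l)

if [ "$instances" -gt 1 ]; then
    exit 1   # another copy is already running
fi
```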
 
Old 02-11-2009, 05:23 PM   #4
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Rocky 9.2
Posts: 18,359

Rep: Reputation: 2751
The classic soln to this potential overrun problem is not to use cron; write a daemon instead and just sleep for 5 mins at the bottom of the loop.
IOW, pseudo-code:

Code:
while true
do
    check_ftp_and_do_stuff
    sleep 300
done
Much simpler than process checking etc.
For the paranoid, you could put a watchdog in cron to restart the daemon if it fails; then you just check for the name of the program:

ps -ef|grep prog_name
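For the watchdog, `pgrep` avoids the classic problem of `ps -ef | grep prog_name` matching the grep itself. A sketch (the daemon name and path here are made up):

```shell
#!/bin/sh
# hypothetical watchdog, run from cron every few minutes;
# pgrep -f matches against the full command line, so it finds
# "/bin/bash /usr/local/bin/ftp_mirror.sh" by script name
if ! pgrep -f ftp_mirror.sh >/dev/null; then
    nohup /usr/local/bin/ftp_mirror.sh >/dev/null 2>&1 &
fi
```

Note that `pgrep -f` will also match any other command line containing the string (e.g. an editor open on the file), so anchor the pattern if that matters.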
 
Old 02-11-2009, 05:36 PM   #6
tmcguinness
LQ Newbie
 
Registered: Dec 2008
Posts: 11

Rep: Reputation: 0
Just curious but why aren't you using rsync?

It does everything you want to do without all the hassle of writing a daemon.
 
Old 02-11-2009, 06:18 PM   #7
gatorpower
LQ Newbie
 
Registered: Feb 2009
Posts: 15

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by chrism01 View Post
The classic soln to this potential overrun problem is not to use cron; write a daemon instead and just sleep for 5 mins at the bottom of the loop.
IOW, pseudo-code:

Code:
while true
do
    check_ftp_and_do_stuff
    sleep 300
done
Much simpler than process checking etc.
For the paranoid, you could put a watchdog in cron to restart the daemon if it fails; then you just check for the name of the program:

ps -ef|grep prog_name
I really like the idea of running a daemon, but that seems like a lot more coding than the 4 lines I had. Though I will start looking into it now.

Also, to answer another comment, I don't use rsync because I'm not just syncing directories. I am monitoring files, processing them, and moving the results to another server if they meet certain criteria that aren't as simple as file name, size, etc.

I've also discovered in testing that the wc -l off-by-one only affects processes that use the bash binary.
 
Old 02-12-2009, 11:30 AM   #8
jan61
Member
 
Registered: Jun 2008
Posts: 235

Rep: Reputation: 47
Hello,

In my opinion, writing a daemon to do the work is not so simple, because you then have to write a second cron job to check whether your daemon is still working ;-)

The reason for your ps problem could be that ps sometimes lists a fork of your script; in that case two instances really are running. Try to find out by printing the PID and PPID of the matched processes. I don't know for sure if I'm right ;-)

Another way to check for running instances is a lock file (this method is often used): at the top of your script, check whether a defined lock file exists and whether the PID stored in it still exists. If not, write your own PID into the lock file, and remove it at the end of your script. Add a trap to ensure the lock file is also removed if your script receives a signal.
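The lock-file-with-PID check might look something like this (a sketch; the lock path is made up, and `kill -0` is the usual "does this PID still exist" probe):

```shell
#!/bin/sh
LOCKFILE=/tmp/ftp_mirror.lock    # hypothetical path

if [ -f "$LOCKFILE" ]; then
    oldpid=$(cat "$LOCKFILE")
    # kill -0 sends no signal; it only tests whether the PID exists
    if kill -0 "$oldpid" 2>/dev/null; then
        exit 1                   # a live instance holds the lock
    fi
    # otherwise the lock is stale: the previous run died uncleanly
fi

echo $$ > "$LOCKFILE"
# remove the lock on normal exit and on signals
trap 'rm -f "$LOCKFILE"' EXIT INT TERM

# ... real work here ...
```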

Jan
 
Old 02-12-2009, 06:08 PM   #9
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Rocky 9.2
Posts: 18,359

Rep: Reputation: 2751
A daemon doesn't have to be complex. As per my example, its just an infinite loop around your 'real code'.
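For a shell script, "making it a daemon" can be as little as backgrounding the loop detached from the terminal. A minimal sketch (the paths and the worker command are placeholders):

```shell
#!/bin/sh
# start the mirror loop in the background, immune to hangups,
# with its output sent to a log file
nohup sh -c '
    while true; do
        /usr/local/bin/check_ftp_and_do_stuff   # hypothetical worker
        sleep 300
    done
' >/var/log/ftp_mirror.log 2>&1 &
echo "started daemon with PID $!"
```

A textbook daemon would also call setsid, fork twice, and chdir to /, but for replacing a cron job this is usually enough.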
 
Old 02-12-2009, 07:33 PM   #10
PTrenholme
Senior Member
 
Registered: Dec 2004
Location: Olympia, WA, USA
Distribution: Fedora, (K)Ubuntu
Posts: 4,187

Rep: Reputation: 354
To answer your original question: both the deprecated `...` form and the $(...) bash form execute the enclosed command in a new sub-shell. So you have created a second shell process with your `...` command, and the count you get includes that second process.

Wouldn't it be easier just to create a semaphore file in /tmp (or, as is customary, in /var/lock) that your script could test for before continuing?

Something like:
Code:
name=${0##*/}                     # strip any leading path from $0
[ -f /tmp/${name}_running ] && exit
touch /tmp/${name}_running
.
.
.
rm /tmp/${name}_running
Of course, to be safe you should trap any script errors and remove the semaphore file before aborting the script. (The advantage of using /var/lock is that that directory is usually emptied when the system boots so old lock files aren't preserved across a boot.)
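With the trap added, the semaphore version might look like this (a sketch, using /var/lock since it is cleared at boot):

```shell
#!/bin/sh
LOCK=/var/lock/${0##*/}.lock

[ -f "$LOCK" ] && exit 0         # another instance is already running
touch "$LOCK"
# the EXIT trap fires on normal exit and after trapped signals,
# so the lock disappears even if the script errors out
trap 'rm -f "$LOCK"' EXIT

# ... mirroring work here ...
```

On Linux, util-linux's `flock(1)` does the same job atomically (`exec 9>"$LOCK"; flock -n 9 || exit 1`), which also closes the small race between the `-f` test and the `touch`.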

Last edited by PTrenholme; 02-12-2009 at 07:35 PM.
 
Old 02-13-2009, 12:33 AM   #11
JulianTosh
Member
 
Registered: Sep 2007
Location: Las Vegas, NV
Distribution: Fedora / CentOS
Posts: 674
Blog Entries: 3

Rep: Reputation: 90
If PTrenholme hadn't said it, I was going to. I'd suggest you use his solution.

Personally, I am interested in trying chrism01's solution as I've never written a daemon process before. What is involved in actually making a script a daemon process?
 
Old 02-13-2009, 03:19 AM   #12
SaTaN
Member
 
Registered: Aug 2003
Location: Suprisingly in Heaven
Posts: 223

Rep: Reputation: 33
Quote:
Originally Posted by Admiral Beotch View Post
If PTrenholme hadn't said it, I was going to. I'd suggest you use his solution.

Personally, I am interested in trying chrism01's solution as I've never written a daemon process before. What is involved in actually making a script a daemon process?
A pretty good tutorial

http://www.webreference.com/perl/tutorial/9/
 
Old 02-13-2009, 10:30 PM   #13
gatorpower
LQ Newbie
 
Registered: Feb 2009
Posts: 15

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by PTrenholme View Post
To answer your original question: both the deprecated `...` form and the $(...) bash form execute the enclosed command in a new sub-shell. So you have created a second shell process with your `...` command, and the count you get includes that second process.
Thank you (and jan61) for this suggestion. I went this route because it was more reliable anyway.

I did capture the results, though; they still confuse me:

Quote:
echo `ps -C ${0##*/} -o args= | wc -l`

# will equal '2'

echo `ps -C ${0##*/} -o args=`

# will equal '/bin/bash ./script.sh'

echo `ps -C ${0##*/} -o args=` > textfile.txt
cat textfile.txt | wc -l

# will equal '1'

echo `ps -C ${0##*/} -o args= > textfile.txt`
cat textfile.txt | wc -l

# will equal '2'

echo `ps -C ${0##*/} -o args= > textfile.txt`
cat textfile.txt

# will equal '/bin/bash ./script.sh /bin/bash ./script.sh'
I still don't really understand why the standard output comes out differently, but at least I can see what happened.
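For the record, two effects are at work in the results above: the backticks fork a subshell that inherits the script's name (so ps -C counts one extra line), and echo on an unquoted substitution word-splits the output, joining all lines with single spaces. The joining part is easy to reproduce:

```shell
# command substitution keeps embedded newlines, but expanding it
# unquoted word-splits the result, and echo rejoins the words
# with single spaces
out=$(printf 'line1\nline2\n')

echo $out            # unquoted: both lines collapse onto one
echo "$out"          # quoted: the newline survives
echo "$out" | wc -l  # counts 2, like the raw pipeline would
```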

Last edited by gatorpower; 02-13-2009 at 10:34 PM.
 
  

