LinuxQuestions.org
Old 11-09-2007, 04:18 PM   #1
davidcollins001
Member
 
Registered: Apr 2007
Posts: 43

Rep: Reputation: 21
perl - using fork


Hi,

Firstly, I apologise for the rambling post, but I am stumped with this problem. Usually I sit with code until the solution appears, but this time I am at a loss.

I have written a small perl program, run from a cron job, to set up a reverse ssh tunnel to my work computer. It checks a pid file in /tmp. If the file exists, it checks whether a process with that pid exists: if the process does exist, the program prints the pid to screen and exits; if it doesn't, the program deletes the pid file and restarts the tunnel. If the pid file doesn't exist, it starts the tunnel. To start the tunnel it forks: the parent writes the child's pid into the pid file, and the child execs the ssh command.

It caused me a problem today in that, for some reason, it kept on spawning processes, so many that I couldn't open a terminal to kill them all. I have been trying to figure it out all evening. When I test it, it usually works as it is supposed to, which is to do nothing if the process already exists, but occasionally something goes wrong: the pid of the current ssh process appears to change (if I find it with ps aux), so the pid file no longer matches the actual process and the program does exactly what it is supposed to do and starts a new one.

I think I am using fork correctly, but if anyone can see what I am doing wrong or give me any pointers I would really appreciate it! I am open to better ways of doing this too, but I would also like a solution to the original problem if possible, now that it has become a bit of a mission.

Here is the code in all its glory (horror!):
Code:
#!/usr/bin/perl


##
## PURPOSE:
##	run reverse ssh to work
##	
##	designed to be run from crontab. creates a lock file so that
##	not more than one instance of the process is started
##

use strict; 
use warnings;
use Fcntl qw (:flock);


## user crontab doesn't have permission in /var
my $hostname="scale";
my $lckfile="/tmp/revssh.${hostname}.pid";


sub start_ssh {

    ## fork process to start ssh
    defined( my $pid=fork ) or die "cannot fork process: $!";


    ## parent - open lock file with child pid
    if($pid) {

	print "Starting process: ${pid}\n";

	open(LOCKFILE,">${lckfile}") or die "Cannot create lock file: $!";
	print LOCKFILE ${pid},"\n";
	close(LOCKFILE);

    } else {

	## child - start ssh process
	exec("ssh -qNfCX -R 2020:localhost:22 ".
	     "phrfad\@${hostname}.astro.warwick.ac.uk")
	  or die "cannot exec process\n";
    }
}




## main

if(! -e ${lckfile}) {

    start_ssh();

} else {

    my $old_pid=`cat ${lckfile}`;
    my @running=split(/\n/,`ps -p ${old_pid}`);


    ## lock file exists - is process still running?
    if ( $#running > 0 ) {
	  die "Process running: ${old_pid}";
      } else {
	  print "Orphan lock file - Lock file deleted\n\t";
	  unlink ${lckfile};
	  start_ssh();
      }
}

Last edited by davidcollins001; 11-09-2007 at 04:27 PM.
 
Old 11-11-2007, 06:31 PM   #2
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Rocky 9.2
Posts: 18,359

Rep: Reputation: 2751
A few things occur to me:

1. you specify
use Fcntl qw (:flock);
but you don't use it ...

2. you don't need to use ${var} instead of $var all the time, just in the odd place where it could be ambiguous, e.g. in your login string or if you wanted ${v1}${v2}

3. print LOCKFILE ${pid},"\n";

I think you want a '.' not a ',' there, although the concatenation is unnecessary; just use:

print LOCKFILE "$pid\n";

4. check the unlink return value (quick sketch below): http://perldoc.perl.org/functions/unlink.html


5. maybe try the Net::SSH or Net::SSH::Perl modules.
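
A quick sketch of points 3 and 4 together (same LOCKFILE, $lckfile and $pid names as your script; only the error checking is new):
Code:
## write the child pid to the lock file, checking each step
open(LOCKFILE, ">$lckfile") or die "Cannot create lock file: $!";
print LOCKFILE "$pid\n";
close(LOCKFILE) or die "Cannot close lock file: $!";

## later, when removing an orphan lock file, check that unlink worked
unlink($lckfile) or warn "Could not delete lock file $lckfile: $!";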

HTH
 
Old 11-12-2007, 03:08 AM   #3
bigearsbilly
Senior Member
 
Registered: Mar 2004
Location: england
Distribution: Mint, Armbian, NetBSD, Puppy, Raspbian
Posts: 3,515

Rep: Reputation: 239
have you got pgrep and pkill for killing runaways?

I tried your code with exec("sleep 12") and it seems fine.
I get no runaways, so maybe it's the ssh part?

I quite like your logic here; I usually worry about deleting the lockfile afterwards,
and this is a neat solution.



did you know that you can send kill 0, $pid, which will return 1
if the process is still alive and 0 otherwise?
(see perldoc -f kill)

Code:
        my $running= kill 0, $old_pid;


        ## lock file exists - is process still running?
        if ( $running > 0 ) {
                      die "Process running: ${old_pid}";
 
Old 11-12-2007, 04:06 AM   #4
davidcollins001
Member
 
Registered: Apr 2007
Posts: 43

Original Poster
Rep: Reputation: 21
Quote:
Originally Posted by chrism01 View Post
A few things occur to me:

1. you specify
use Fcntl qw (:flock);
but you don't use it ...
That was from the first effort using the lock file stuff that is provided, I just forgot to remove it

Quote:
2. you don't need to use ${var} instead of $var all the time, just in the odd place where it could be an issue eg your login or if you wanted ${v1}${v2}
I know but sometimes I get a bit OCD and like my variables to all look the same

Quote:
3. print LOCKFILE ${pid},"\n";

I think you want a '.' not a ',' there, although concat is unnecessary, use:

print LOCKFILE "$pid\n";

4. check the unlink rtn value: http://perldoc.perl.org/functions/unlink.html


5. maybe try the Net::SSH or Net::SSH::Perl modules.

HTH
I will have a look at the ssh modules; I am still relatively new to perl and don't fully know what it has to offer in the way of modules.

Quote:
have you got pgrep and pkill for killing runaways?

I tried your code with exec("sleep 12") and it seems fine.
I get no runaways maybe it's the SSH part?

I quite like your logic here, I usually worry about deleting the lockfile afterwards
this is a neat solution.
My main computer is OS X and it doesn't have pkill and some other nice linux commands.

Quote:
did you know that you can send a kill 0 $pid which will return 1
if the process is still alive, 0 otherwise.
(see perldoc -f kill)
I didn't know about kill 0. I didn't much like my current way of checking the process, as it shells out to ps, which seemed a bit weak!


I wasn't sure whether my code was simply working in a way I didn't intend and doing something funny. I have run it a couple of times, and this is what I get when checking the processes: if I start the tunnel and check straight away, ps shows the expected pid, but if I check again a few moments later the pid of the ssh process has changed. I guess this isn't a totally random error, and that there is something I just don't know about either ssh or backgrounded processes. Can anyone shed some light on this for me? I would like to understand this better if it is a real effect.

Thanks


Code:
David-Collins:~ davidcollins$ revssh;ps aux |grep ssh
Starting process: 7616
davidcol  7616   0.0  0.2    27508    860  p6  R    10:13AM   0:00.03 ssh -qNfCX -R 2020:localhost:2
davidcol  7618   0.0  0.0    27812      4  p6  R+   10:13AM   0:00.00 grep ssh

David-Collins:~ davidcollins$ ps aux |grep ssh
davidcol  7898   0.0  0.1    28312    372  ??  Ss   10:13AM   0:00.00 ssh -qNfCX -R 2020:localhost:2
davidcol  8188   0.0  0.0    27812      4  p6  R+   10:13AM   0:00.00 grep ssh
 
Old 11-12-2007, 04:20 AM   #5
bigearsbilly
Senior Member
 
Registered: Mar 2004
Location: england
Distribution: Mint, Armbian, NetBSD, Puppy, Raspbian
Posts: 3,515

Rep: Reputation: 239
aha, well perhaps the ssh is respawning itself, hence you are losing the pid.

try system instead of exec; then ssh will be a sub-process
of the perl script and the pid will be persistent. Slight overhead, but there you go.
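
Roughly what I mean, as a sketch against the child branch of your start_ssh ($ssh_cmd here just stands for the same ssh command line you already exec; exact error handling is up to you):
Code:
    } else {
	## child - run ssh as a sub-process rather than replacing
	## ourselves with exec; this perl child keeps the pid that the
	## parent wrote to the lock file and waits for ssh to finish
	my $rc = system($ssh_cmd);
	exit($rc == 0 ? 0 : 1);    # non-zero exit if ssh failed
    }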

I don't think the perl SSH libs are what you want. I've been trying myself to
do secure connections but had no luck with them; they seem more for
browser-like stuff over https.
I cannot forward ports with our installed SSH; any ideas?

speaking of system functions, you are using cat;
a sneaky perl-ish way to read the file without using open is...
Code:
@ARGV = ($pidfile);  # must be a list
$first_line_of_file = <ARGV>;   # synonym for <>
 
Old 11-12-2007, 02:56 PM   #6
davidcollins001
Member
 
Registered: Apr 2007
Posts: 43

Original Poster
Rep: Reputation: 21
Quote:
Originally Posted by bigearsbilly View Post
aha, well perhaps the ssh is respawning itself also hence you
are losing the pid.

try system instead of exec then it will be a sub-process
of the perl script and the pid will be persistent, slight overhead but there you go.

I don't think the perl SSH libs are what you want, I've been trying myself to
do secure connections but had no luck with them, they seem more for like
browser-like stuff over https.
I cannot forward ports with our installed SSH any ideas?

speaking of system functions, you are using cat,
a sneaky perl-ish way without using open is...
Code:
@ARGV = ($pidfile);  # must be a list
$first_line_of_file = <ARGV>;   # synonym for <>
I was initially using exec because I have been looking at a lot of C recently and exec is the way of doing it there, so I thought it couldn't do much harm. The `cat` system call was going to be one of those things that I eventually changed; I actually have a commented-out open statement that I removed from the post. It wasn't part of the main problem so I ignored it for now. Your way is very neat and I have just given it a go - perl is great!

I have just figured it out, ehem *cough* yep it was a silly bug!!! This line:

Code:
    if ( $#running > 0 ) {
@running will only ever have one element, so $#running is 0 and this test is always false! That part of the code now reads:

Code:
    @ARGV = ($lckfile);
    my $old_pid = <ARGV>;
    my $running = kill 0, $old_pid;


    ## lock file exists - is process still running?
    if ( $running == 1 ) {
Thanks for your input, I guess I should have looked closer at my code!
 
Old 11-13-2007, 03:19 PM   #7
davidcollins001
Member
 
Registered: Apr 2007
Posts: 43

Original Poster
Rep: Reputation: 21
I played a bit more and I found the real cause, I think, of this problem. It seemed that the -f switch in the exec call was causing the problem. I have removed it and it seems to work fine now!

Does anyone know how using this switch affects the pid of the process?
 
Old 11-14-2007, 02:27 AM   #8
bigearsbilly
Senior Member
 
Registered: Mar 2004
Location: england
Distribution: Mint, Armbian, NetBSD, Puppy, Raspbian
Posts: 3,515

Rep: Reputation: 239
of course, as I guessed earlier, the -f puts ssh into the background
(-f = fork), hence the pid will move about.
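
For reference, since the perl fork already puts ssh in the background, the child branch no longer needs -f at all; a sketch with the same command line as the original script, just with -f dropped:
Code:
	## child - start ssh without -f so it does not fork again;
	## the pid the parent wrote to the lock file stays the ssh pid
	exec("ssh -qNCX -R 2020:localhost:22 ".
	     "phrfad\@${hostname}.astro.warwick.ac.uk")
	  or die "cannot exec process: $!\n";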
 
  

