LinuxQuestions.org

-   Programming (https://www.linuxquestions.org/questions/programming-9/)
-   -   perl File::Tail hell (https://www.linuxquestions.org/questions/programming-9/perl-file-tail-hell-537413/)

whysyn 03-14-2007 10:03 AM

perl File::Tail hell
 
I'm using a perl script to tail an application log (the app doesn't support syslog). The script drops some lines via regex, then uses Unix::Syslog to pass the rest to the normal syslog daemon. The script is run as a daemon under daemontools, and the parsing/inserting needs to be real-time.

The problem is, when there is particularly heavy writing in the application's log (sometimes over 20 lines per second) the perl script starts parsing the file from the beginning again, giving me tons of duplicate entries.

Here is a copy of the script. Any ideas on how to fix it, or better yet, a more elegant & reliable solution?

Code:

#!/usr/bin/perl -w

use File::Tail;
use Unix::Syslog qw(:macros :subs);

my $Filename = '/var/log/application.log';
my $File = File::Tail->new(name => $Filename, tail => 1, interval => 1.0);
if (not defined $File) {
    die "/usr/local/bin/taillog.pl: cannot read input \"$Filename\": $!\n";
}

openlog "taillog", LOG_PID, LOG_LOCAL6;

while (defined($_ = $File->read)) {
    chomp;
    if ( (/blah blah blah/) or # ignore this because...
         (/blah blah blah/) or # also ignore this
         (/blah blah blah/) ) {
        next;
    }
    # pass the line through "%s" so a stray % in the log text
    # isn't treated as a printf-style format specifier
    syslog LOG_INFO, "%s", $_;
}

closelog;
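One thing that might be worth trying: the duplicates sound like File::Tail deciding the file was "reset" (truncated/rotated), which it can do mistakenly under heavy writes, and by default it then re-reads the whole file. The constructor exposes options for exactly this. A sketch only, using the option names from File::Tail's documentation; the numeric values are guesses to tune:

```perl
#!/usr/bin/perl -w
use File::Tail;

my $Filename = '/var/log/application.log';
my $File = File::Tail->new(
    name        => $Filename,
    tail        => 1,    # start from only the last line
    interval    => 1.0,
    maxinterval => 5,    # guess: never back off to polling slower than 5s
    resetafter  => 60,   # guess: require 60s of silence before assuming a reset
    reset_tail  => 0,    # after a (real or imagined) reset, skip the old lines
                         # instead of the default -1, which re-reads the file
);
die "cannot read \"$Filename\": $!\n" unless defined $File;
```

With reset_tail => 0, even a wrongly-detected reset should only lose new lines rather than replay the whole log into syslog.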


spirit receiver 03-14-2007 01:59 PM

Maybe you could pipe the output of "tail -f" through your script and read from standard input instead?

Something like this:
Code:

tail -f FILENAME | perl -e 'while ( <STDIN> ){ print }'

whysyn 03-14-2007 02:44 PM

Thanks for the suggestion!

We've been down that road before... there are process control issues relating to rolling log files, process crash recovery, etc, etc.


All times are GMT -5.