I'm using a Perl script to tail an application log (the app doesn't support syslog). The script filters out some lines with regexes and then uses Unix::Syslog to send the rest to the normal syslog daemon. The script runs as a daemon under daemontools, and the filtering/forwarding needs to happen in real time.
The problem is that when the application writes to its log particularly heavily (sometimes over 20 lines per second), the Perl script starts reading the file from the beginning again, giving me tons of duplicate entries.
Here is a copy of the script. Any ideas on how to fix it, or better yet, a more elegant & reliable solution?
Code:
#!/usr/bin/perl -w
use strict;

use File::Tail;
use Unix::Syslog qw(:macros :subs);

my $Filename = '/var/log/application.log';

my $File = File::Tail->new(name => $Filename, tail => 1, interval => 1.0);
if (not defined $File) {
    die "/usr/local/bin/taillog.pl: cannot read input \"$Filename\": $!\n";
}

LOOP: while (defined($_ = $File->read)) {
    chomp;
    if (   (/blah blah blah/)    # ignore this because...
        or (/blah blah blah/)    # also ignore this
        or (/blah blah blah/) ) {
        next;
    }
    else {
        openlog "taillog", LOG_PID, LOG_LOCAL6;
        # pass the line as an argument so a '%' in the log text isn't treated as a format directive
        syslog LOG_INFO, "%s", $_;
        closelog;
        next;
    }
}
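For reference, the File::Tail documentation describes a reset_tail option that controls how much of the file is re-read once the module decides the file has been reset (the default is -1, i.e. the whole file), plus maxinterval/resetafter settings that govern how it polls and when it suspects a truncation or rotation. I'm not certain this is what's causing the duplicates, but if it is, an untested sketch along these lines should at least keep a spurious reset from replaying the entire log:

# Untested sketch: same tailing behaviour, but after a (possibly spurious) reset
# skip the old content (reset_tail => 0) instead of re-reading the whole file.
# Option names are taken from the File::Tail documentation.
my $File = File::Tail->new(
    name        => $Filename,
    tail        => 1,      # read only the last line when first opened
    interval    => 1.0,    # initial poll interval in seconds
    maxinterval => 5,      # never back off the poll interval beyond 5 seconds
    resetafter  => 60,     # seconds of inactivity before suspecting truncation/rotation
    reset_tail  => 0,      # after a reset, read only new lines rather than the whole file
);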