Programming
This forum is for all programming questions. The question does not have to be directly related to Linux, and any language is fair game.
Distribution: RHEL, CentOS, Debian, Oracle Solaris 10
Posts: 1,420
Rep:
Tail multiple log files.
Hi,
I have a script to tail multiple log files. Here is the script:
Code:
$ vi multi-tail.sh

#!/bin/sh
# When this script exits, also kill all background processes.
trap 'kill $(jobs -p)' EXIT

# Iterate over each given file name,
for file in "$@"
do
    # and show its tail in the background.
    tail -f "$file" &
done

# Wait until Ctrl+C.
wait
Now, I can run this script like this:
Code:
$ ./multi-tail.sh error_log access_log
Right now I don't have a test environment, so can anyone please help me understand the exact meaning of this line in the above script:
Quote:
trap 'kill $(jobs -p)' EXIT
It uses the kill command. Will it kill the running background processes?
trap catches process signals like SIGINT and SIGHUP (produced by things like Ctrl+C and closing the terminal) and lets you redefine what happens when they are received. (Note that SIGKILL can never be caught.) In this case it has only been configured to handle the EXIT pseudo-signal, killing the list of all sub-processes produced by "jobs -p".
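A minimal sketch of that pattern, with sleep standing in for the tail processes:

```shell
#!/bin/sh
# Two dummy background jobs stand in for the tail -f processes.
sleep 100 &
sleep 100 &
# On EXIT (a normal end, or the shell's cleanup after Ctrl+C), kill
# every background job of this shell; jobs -p prints one PID per job.
trap 'kill $(jobs -p)' EXIT
jobs -p
```

When the script reaches its end, the EXIT trap fires and both sleep processes are killed, so nothing is left running.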
BTW, I'm wondering about the output to all these tail commands. As it stands I believe they'll all go to the same terminal mixed together. Is that what you want?
Actually, I think you could drop the loop and just run:
Code:
tail -f "$@"
There will be only one process, so there's no need to background anything, and it also conveniently prints out which file each new line comes from.
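So the whole script could shrink to something like this (a sketch):

```shell
#!/bin/sh
# multi-tail.sh -- follow every file named on the command line with a
# single tail process. With more than one file, tail prints a
# "==> filename <==" header before each file's output.
tail -f "$@"
```

It is invoked the same way as before, e.g. `./multi-tail.sh error_log access_log`, and Ctrl+C now stops the single tail directly.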
Original Poster
Hi David,
Quote:
BTW, I'm wondering about the output to all these tail commands. As it stands I believe they'll all go to the same terminal mixed together. Is that what you want?
Yeah, I tested it and it all goes to the same terminal, mixed together. Here is the output:
Quote:
[root@localhost ~]# ./logsrun.sh /var/log/boot.log /var/log/audit/audit.log
./logsrun.sh: line 1: $: command not found
type=USER_START msg=audit(1372928401.467:19): user pid=2585 uid=0 auid=0 ses=2 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:session_open acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=CRED_DISP msg=audit(1372928401.630:20): user pid=2585 uid=0 auid=0 ses=2 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=USER_END msg=audit(1372928401.631:21): user pid=2585 uid=0 auid=0 ses=2 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:session_close acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=USER_ACCT msg=audit(1372928461.652:22): user pid=2590 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:accounting acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=CRED_ACQ msg=audit(1372928461.652:23): user pid=2590 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=LOGIN msg=audit(1372928461.653:24): pid=2590 uid=0 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 old auid=4294967295 new auid=0 old ses=4294967295 new ses=3
type=USER_START msg=audit(1372928461.654:25): user pid=2590 uid=0 auid=0 ses=3 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:session_open acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=CRED_DISP msg=audit(1372928461.973:26): user pid=2590 uid=0 auid=0 ses=3 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:setcred acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=USER_END msg=audit(1372928461.974:27): user pid=2590 uid=0 auid=0 ses=3 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:session_close acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=USER_AUTH msg=audit(1372928838.324:28): user pid=2644 uid=500 auid=500 ses=1 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 msg='op=PAM:unix_chkpwd acct="satyaveer" exe="/sbin/unix_chkpwd" hostname=? addr=? terminal=? res=success'
Starting acpi daemon: [ OK ]
Starting HAL daemon: [ OK ]
Retrigger failed udev events [ OK ]
Enabling Bluetooth devices:
Starting sshd: [ OK ]
Starting postfix: [ OK ]
Starting abrt daemon: [ OK ]
Starting crond: [ OK ]
Starting atd: [ OK ]
Starting rhsmcertd... [ OK ]
And I don't want it like this, all mixed up.
And you're suggesting I put only
Quote:
tail -f "$@"
Here it is:
Quote:
#$ vi multi-tail.sh
#!/bin/sh
# When this exits, kill all background processes too.
trap 'kill $(jobs -p)' EXIT
# The loop is no longer needed:
#for file in "$@"
#do
# show the tail in the background.
tail -f "$@" &
#done
wait
Original Poster
Hi All,
I have to give the log file paths in the script itself, not while running the script. So, can anyone suggest how and where to add the log file paths in the script?
I'm saying you can use a single tail command and remove almost everything else from the script. Process backgrounding and cleanup with trap aren't needed if there's only a single process to manage.
Adding a path prefix is easy, as long as you use bash or another more advanced shell, instead of sh.
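For instance, in bash you could keep the file names in an array and prepend the directory in one expansion (a sketch; the directory and names are placeholders):

```shell
#!/bin/bash
logdir=/var/log
files=(boot.log audit/audit.log)
# "${files[@]/#/$logdir/}" prepends "$logdir/" to every array element,
# yielding /var/log/boot.log and /var/log/audit/audit.log here.
tail -f "${files[@]/#/$logdir/}"
```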
The only realistic way you could have separate outputs would be to give each tail command its own terminal window. But that would mean having to use a loop again. I think the above is a good compromise, since tail will clearly label which output comes from which log, as I mentioned before.
Quote:
I have to give the log file paths in the script itself, not while running the script. So, can anyone suggest how and where to add the log file paths in the script?
To set them in the script rather than at run time:
Following David the H.'s recommendation of just using the tail command, you could simply place those paths in an array. If you're using bash you can do:
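The array version might look like this (a sketch; the paths are just the ones used earlier in the thread):

```shell
#!/bin/bash
# Hard-coded log paths in a bash array; edit this list as needed.
logs=(
    "/var/log/boot.log"
    "/var/log/audit/audit.log"
)
tail -f "${logs[@]}"
```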
Note that the path strings need not be placed within double quotes ("") inside the array declaration if they don't contain whitespace characters, but it's cleaner that way.
But then again, wouldn't it be simpler just to place the paths directly on the tail command line? That way you don't have to depend on any advanced shell.
Also, since you don't have to run any other command after that, it would be better to replace the calling shell's process with tail's by using exec:
If I use the above script on Linux, then both log files update periodically.
But if I use it on Unix (Solaris), then only the first log file specified in the script updates periodically.
Why is that?
Note: assuming the same log files are in the script on Unix as well.
Quote:
But if I use it on Unix (Solaris), then only the first log file specified in the script updates periodically. Why is that?
First make sure that you're running the script with bash. If not, use the simpler method, which is compatible with all shells derived from the original sh.
I tried the script using the bash shell. Which version of tail should work correctly with multiple log files on Unix?
Probably a newer version, but I'm referring to the tail packaged with GNU coreutils (I don't know which version Solaris ships). I'm also not sure which version of tail first behaved like the one on your Linux system.
You could try building tail from a newer version of coreutils, but you need to consider the possible impact on the system. Perhaps you could install it separately in a different directory like /usr/local/bin or /opt. You don't necessarily need to install the whole of coreutils, but you do have to consider the required dependencies, including any that would need to be updated. The main things to watch are the libraries you might have to upgrade just to make tail work, while not breaking the linkage of other binaries that depend on them.
I'm sorry I can't be of much help with this; perhaps starting a new thread about installing a new version of tail would be useful.
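If you do go that route, the build might look roughly like this sketch (the version number is a placeholder; check the GNU site for a current release, and note that this leaves the system tail untouched):

```shell
# Download, build, and install coreutils under its own prefix.
wget https://ftp.gnu.org/gnu/coreutils/coreutils-8.21.tar.xz
tar xf coreutils-8.21.tar.xz
cd coreutils-8.21
./configure --prefix=/opt/coreutils
make
make install   # may need root for /opt
# Then call the new tail explicitly:
/opt/coreutils/bin/tail -f /var/log/boot.log /var/log/audit/audit.log
```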
Last edited by konsolebox; 07-13-2013 at 03:28 AM.
Original Poster
I have to write a script for log monitoring on a production server. But the problem is that we cannot update packages on our own there; we're not authorized to do updates on the server. So, that's the situation here...