Can a bash script be made to auto-kill a program based on certain criteria
I'm pretty new to Linux and have barely written my first "Hello World" bash script, so I'm looking for a solution simple enough for a newbie like me, not something way over my head.
Anyway, I'm wondering whether it's possible to make a simple bash script that monitors a program, possibly through its log file, and issues a kill command by name when a certain event occurs. For example, say you have a program called Bartledoo, and while it's running it logs various events: event1, event2, event3, and so on. You want that program to keep running until it logs event4; at that point you want to issue "pkill Bartledoo" or "killall Bartledoo", so that the process is terminated after doing that particular event. Is that possible to do? |
It is very possible.
Code:
#!/bin/bash
chmod it to 700 and set a cron job to run it with something similar to Code:
*/1 * * * * /path/to/Bartledoo_checks.sh |
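The body of the checker script didn't survive in this thread, so here is a minimal sketch of what it might look like; the log path and the "event4" trigger string are assumptions taken from the question:

```shell
#!/bin/bash
# Hypothetical checker sketch (log path and event name are assumptions):
# kill Bartledoo once "event4" appears in its log.
LOG=/path/to/Bartledoo.log

# grep -q exits 0 on a match and prints nothing,
# which makes it convenient inside an if.
if grep -q "event4" "$LOG" 2>/dev/null; then
    killall Bartledoo 2>/dev/null
fi
```

Cron then runs this once a minute; most minutes it finds nothing and exits silently.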
Thank you so much, I will try this soon. While I am in the process of learning to do this stuff on my own as well, I'm sure you saved me a ton of time and frustration with this.
Just one more question: what if the program doesn't have its own log file, but is one of those that runs continuously in the terminal window until you press Ctrl+C, printing its messages directly to that window? Is there a way to modify the bash script to do the exact same thing by monitoring that output? |
Ref post #3
Yes, you could make your own log file. The trick is redirection: http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-3.html Code:
coproc Bartledoo > /path/to/Bartledoo/log 2>&1
It 'forks' and runs as a background process, independent of the current shell session (you can close the current shell and it will still be running). The link I gave describes the redirections: the > /path/to/... sends stdout to a file, and 2>&1 then redirects stderr to the same place. (The order matters: writing 2>&1 before the file redirection would leave stderr pointing at the terminal.) Habitual's cron script can then monitor the log.
tldp.org is fantastic, and I also find http://mywiki.wooledge.org/BashGuide/ extremely useful. Ref coproc: http://www.gnu.org/software/bash/man...processes.html Using that you could log the 'actual' PID and have a more refined kill target.
Something else to consider (especially with bash < 4) is the $! variable, which gives you the PID of the last process sent to the background. Note, when logging PIDs you should really test that the PID belongs to what you think it belongs to; PIDs get reused once a process exits for whatever reason. I will leave that logic to you; it's probably not needed, since killall <exec name> fits as you only run one at a time... but there's nothing wrong with thinking about 'safety' in even simple scripts. |
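The PID-safety point above can be sketched like this; using sleep as a stand-in for Bartledoo is an assumption for demonstration purposes:

```shell
#!/bin/bash
# Start a long-running command in the background and record its PID.
sleep 300 &
pid=$!      # $! holds the PID of the most recent background process

# PIDs get reused, so verify the PID still names the expected command
# before sending it a signal.
if ps -p "$pid" -o comm= | grep -q 'sleep'; then
    kill "$pid"
fi
```

ps -p with -o comm= prints just the command name for that PID, so the check guards against killing an unrelated process that inherited a recycled PID.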
Code:
/path/to/Bartledoo 2>&1 | tee /path/to/Bartledoo.log |
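For reference, tee duplicates its stdin to both stdout and the named file, so the program's messages stay visible in the terminal while also landing in a log the checker can grep. A toy stand-in for the real program (the path is an assumption):

```shell
# Stand-in for a chatty program: its output stays on screen
# and is also copied into a log file by tee.
printf 'event1\nevent4\n' | tee /tmp/Bartledoo.log

# The log file now holds the same lines:
grep -c 'event' /tmp/Bartledoo.log   # prints 2
```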
Thanks! |
This thread has been very helpful so far; even though I've been reading through bash guides, I hadn't yet found the $? exit status until I started reverse engineering the script provided.
Anyway, I have one other question about the original script mentioned in post 2, which monitors the log file of a certain program and then kills that program as soon as a particular event happens. Is there a good way to just loop this, rather than scheduling a cron job? I see there are "until", "while", and possibly other commands, but I'm not sure which would be best. Something like this (only in proper syntax): Code:
#!/bin/bash |
cron is the simple solution; otherwise you'll need to write a loop similar to your suggestion, put it in a file, and then
Code:
nohup /path/to/loop.sh >loop.log 2>&1 &
Keep reading, e.g. http://rute.2038bug.com/index.html.gz for a bash howto. |
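A loop along the lines suggested might look like this; the 5-second interval, log path, and event string are assumptions:

```shell
#!/bin/bash
# Polling-loop sketch (interval, path, and event name are assumptions).
LOG=/path/to/Bartledoo.log

while true; do
    if grep -q "event4" "$LOG" 2>/dev/null; then
        killall Bartledoo 2>/dev/null
        break           # stop looping once the kill has been issued
    fi
    sleep 5             # poll faster than cron's one-minute minimum
done
```

Started with nohup and &, this survives the shell session being closed, just like a cron job would.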
So I did look into the cron option; however, from what I can tell, the shortest interval you can schedule a cron job for is once per minute. Is there a way to do it more often?
I did, however, find a solution for the first scenario, based on a slight modification of what was posted in post #2, that works well without cron. Basically it goes like this: Code:
#!/bin/bash |
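The rest of that script didn't survive in this thread, but one cron-free shape it could plausibly take is an event-driven version built on tail -f (the path and event name are assumptions); note the tail process itself only goes away on its next write attempt after the loop exits:

```shell
#!/bin/bash
# Event-driven sketch: follow the log as it grows and act on the
# trigger line the moment it appears (no polling interval at all).
tail -f /path/to/Bartledoo.log | while read -r line; do
    case "$line" in
        *event4*)
            killall Bartledoo
            break
            ;;
    esac
done
```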