LinuxQuestions.org


auximini 04-07-2009 02:28 PM

Limiting Cron by Load Average
 
Hello,

Is it possible with vixie cron to limit cron's activity based on the current load average? Basically I'm looking for the same functionality that's in fcron, but I'm restricted to using the vixie cron that ships with CentOS 5.3.

If not, does anyone have any recommendations on limiting the impact that cron has when it runs 20-30 user-specified crontabs at once?

Thanks
Joe

rylan76 04-08-2009 02:47 AM

Well Hetzner (a German hosting provider) uses something called "processwatch" to regulate crontabs (and all processes on their server). They don't seem to control the number of cron processes, but if something runs for too long (not sure how "too long" is defined...) it is killed automatically.
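
For illustration, a wrapper that kills a cron command once it has run "too long" could look roughly like the sketch below - this is something I am making up now, NOT Hetzner's actual processwatch, and the runlimit.sh name and the 600-second limit are my own assumptions. As far as I know the CentOS 5 coreutils doesn't include timeout(1) yet, so it has to be hand-rolled:

Code:

#!/bin/bash
# runlimit.sh - hypothetical sketch, not Hetzner's processwatch:
# kill the wrapped command if it runs longer than MAXSECS seconds.
# Crontab usage would be e.g.:  2 0 * * * /usr/local/bin/runlimit.sh /path/to/job.sh
MAXSECS=600            # assumed limit; pick whatever "too long" means for you

"$@" &                 # start the real job in the background
job=$!

( sleep "$MAXSECS"; kill "$job" 2>/dev/null ) &   # watchdog that fires after MAXSECS
watchdog=$!

wait "$job"            # wait for the job (or for the watchdog to kill it)
status=$?
kill "$watchdog" 2>/dev/null   # job finished in time: stop the watchdog (a stray sleep may linger)
exit "$status"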

I mostly got around this by staggering my cronjobs, i.e.:

Code:

MAILTO=""
2 0 * * * /usr/home/fwfunn/public_html/gen_backups.sh >/dev/null 2>&1
3 0 * * * /home/httpd/cgi-bin/php5 /usr/home/fwfunn/public_html/daily_cron.php >/dev/null 2>&1
4 0 1 * * /home/httpd/cgi-bin/php5 /usr/home/fwfunn/public_html/monthly_cron.php >/dev/null 2>&1
05 00 * * * /home/http/cgi-bin/php5 /usr/home/fwfunn/public_html/fix_funeral_dep_group_type_mismatches.php >/dev/null 2>&1

I.e. gen_backups.sh runs at 00:02 each day, then daily_cron.php at 00:03 each day, and so on.

Maybe you can also enforce staggering? Hetzner does this through their web interface for configuring domain-wide cronjobs: it makes it impossible to schedule a job "tighter" than once every ten minutes, which ensures you can't overload the server with cronjobs that fire too quickly, one after the other.

(I have my own server with them, so I have SSH access and wrote the above crontab by hand rather than through their interface - i.e. that is how I can run things every minute instead of only every ten minutes, which their KonsoleH web interface would enforce.)

auximini 04-08-2009 08:11 AM

Hi Stefan,

Thanks for your reply and the information! I will look into something like processwatch and see if I can utilize it.

Staggering cron jobs definitely works, and it works great. We had a server that was constantly hitting a load of 80 or higher and we couldn't figure out why. Eventually we noticed that several hundred cron jobs were all firing on the 5-, 10-, and 30-minute intervals of the hour. I staggered all the jobs by 1-2 minutes and the load hasn't gone above 8 since.

So it works great, but we have 40 servers to maintain and need a more universal solution. That's why I was looking for a queuing feature; I may even give processwatch a try and just kill the jobs.
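
Just as a sketch of what a more universal stagger might look like (splay.sh is a made-up name and none of this is tested): each box could derive its own offset from a hash of its hostname plus the job name, so the same crontab line fires at slightly different times on every one of the 40 servers without any hand editing:

Code:

#!/bin/bash
# splay.sh - hypothetical helper, only a sketch: sleep a deterministic
# 0-299 second offset derived from the hostname and a job name, then run the job.
# Crontab usage would be e.g.:  */30 * * * * /usr/local/bin/splay.sh nightly /path/to/job.sh
JOBNAME=$1; shift

# cksum is plain coreutils, so this should also be available on CentOS 5
OFFSET=$(( $(echo "$(hostname)-$JOBNAME" | cksum | cut -d' ' -f1) % 300 ))
sleep "$OFFSET"
exec "$@"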

I would love to move to fcron with its queuing and load-average support, but from what I've read it is not 100% compatible with vixie cron (no cron.d support), so I don't want to break our standard CentOS setup.

rylan76 04-09-2009 05:39 AM

Ok well, glad to be of help.

Just a word about processwatch. I'm not sure (of course) what your setup or service level agreements with your customers / users are, but processwatch can be a nasty surprise to someone who isn't aware it is present. I.e. I never even -knew- it was running on Hetzner servers until one day I scheduled a hugely important, reasonably "heavy" cronjob and it never completed - leaving a database halfway archived (in my own semantic view of the data) and causing chaos for some of my customers. I had a real storm of excrement to deal with when I got to the office that morning, and only THEN did I find out that processwatch was running on Hetzner's servers and that I would NOT be able to execute the cronjob as planned.

I.e. if you do deploy processwatch or something like it, warn people or publicize the fact that this kind of load-limiting device is watching their cronjobs - don't just quietly roll it out where it might catch some unwary developer, like it caught me!

Regards,

the_First_Called 09-10-2010 03:40 PM

Maybe this helps...
 
To prevent cron jobs from running during high system load I wrote a small script. It checks the load average and, if it is too high, sleeps for a while and checks again.
Once the load average drops low enough, it runs its command line.
For cron I modified /etc/crontab (Fedora 10):

Code:

# run-parts
01 * * * * root loadlimit run-parts /etc/cron.hourly
02 4 * * * root loadlimit run-parts /etc/cron.daily
22 4 * * 0 root loadlimit run-parts /etc/cron.weekly
42 4 1 * * root loadlimit run-parts /etc/cron.monthly

where loadlimit is the name of my script:

Code:

#!/bin/bash

# Script to limit cron jobs during high system loadavg.
# (cron doesn't honor loadavg at all)
# Some ideas are from:
# http://linuxhostingsupport.net/blog/shell-script-to-monitor-load-average-on-a-linux-server
# Author: A.V.Varlashkin (varlash at newmail dot ru)

# Define the threshold. This value is compared with the current load average.
# Set the value as you wish. Scale: 100 = 1.00
#LIMIT=130
LIMIT=100
TIME2WAIT=5m

while
# Retrieve the load average of the past 15 minutes (change $5 to $3 or $4 for the 1- or 5-minute average);
# the sed strips the decimal point and any trailing comma, so e.g. 0.15 becomes 015
Load_AVG=`uptime | cut -d'l' -f2 | awk '{print $5}' | sed 's/[.,]//g'`
# Compare the current load average with the threshold and
[ "$Load_AVG" -gt "$LIMIT" ]
do
# wait until it becomes acceptable
        sleep $TIME2WAIT
done

# Load is low enough, run the command
# echo "$@"
exec "$@"
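
If parsing uptime looks fragile, the same value can probably be taken straight from /proc/loadavg (its first three fields are the 1-, 5- and 15-minute averages). An untested variant of the Load_AVG line, just an idea:

Code:

# untested alternative to the uptime pipeline: read the 15-minute average
# from /proc/loadavg and convert it to the same 100 = 1.00 integer scale
Load_AVG=$(awk '{ printf "%.0f", $3 * 100 }' /proc/loadavg)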

Beware: the script is only lightly tested, so
#include <discard.all>
Good luck!
Andrei.

