is there any way to check the website availability regularly
Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
You can use Website-Watcher (https://www.aignes.com/) as well.
Of course, it's *really* worth it only if you have *many* other tasks/websites to give it; otherwise it would be like killing a fly with a cannon.
Yes, I know there is paid software that can do that. Is there any good free way, e.g. the above suggestion to create a wget/curl cron job to check?
^ Indeed it is paid software. I just mentioned it in case you didn't know about it, because it can be really useful, even indispensable, for advanced website monitoring, especially when looking for specific keywords. It's a must-have for competitor/informational monitoring...
I don't know the basic internal commands this software relies on, though...
That being said, you can absolutely meet your requirement with simple curl/wget requests run from cron, as already suggested.
The curl option would be a capital -I, or spelled out as --head, to check just the headers. That will reduce the load on the server and cost less bandwidth. The exit code is 0 if everything is OK.
Code:
# Poll every 6 minutes; leave the loop as soon as curl reports a failure.
while curl --silent --head http://www.example.com/home/ > /dev/null
do
    sleep 360
done
# The site stopped responding: play a sound and pop up an alert.
aplay /usr/share/orage/sounds/Boiling.wav
DISPLAY=:0.0 xmessage -center "web site 'www.example.com' is down $(date)"
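If you'd rather let cron do the scheduling than keep a loop running, a one-shot variant of the same check could look like the sketch below. This is only an illustration: the URL is a placeholder, and the alert is just a message on stderr (cron mails any output to the crontab owner), which you'd swap for mail, xmessage or whatever suits you.

```shell
#!/bin/sh
# One request per run; cron provides the repetition.
URL="http://www.example.com/"

# --head fetches only headers (less bandwidth); --fail turns HTTP errors
# (status >= 400) into a non-zero exit code, so DNS failures, timeouts and
# server errors are all caught by the same exit-code test.
if curl --silent --head --fail --max-time 30 "$URL" > /dev/null 2>&1; then
    echo "$(date): $URL is up"
else
    # cron mails anything a job prints, so this is already a simple alert.
    echo "$(date): $URL is DOWN" >&2
fi
```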
Or, if you are getting into more serious numbers of servers and services, then look at tools like Zabbix.
The problem with monitoring internally is that it only goes part of the way. For example, if you are monitoring a server from itself, that doesn't take into account anything like the external network or internet connection. That's why I mentioned NodePing: they attempt to connect to the given URL just like a regular user, so it's a more realistic test of whether the site is available.
Quote:
Originally Posted by Turbocapitalist
Or if you are getting into more serious numbers of servers and services then look at tools like Zabbix
If you're building an internal monitoring system then also consider Nagios or the fork Centreon.
At the moment I'm using a central Centreon instance with remote pollers to monitor 3,500 services on 350 hosts across three data centers; the pollers also monitor each other and can send alerts independently of the central instance. There's also NodePing for third-party remote checking of connectivity to sites. Externally we use Site24x7 to check specific URLs, and I've just noticed that Site24x7 has a free plan with e-mail alerting for up to 5 URLs.
Quote:
Originally Posted by TenTenths
At the moment I'm using a central Centreon instance with remote pollers to monitor 3,500 services on 350 hosts across three data centers; the pollers also monitor each other and can send alerts independently of the central instance. There's also NodePing for third-party remote checking of connectivity to sites. Externally we use Site24x7 to check specific URLs, and I've just noticed that Site24x7 has a free plan with e-mail alerting for up to 5 URLs.
When I was wrangling a Nagios monitoring system, we made sure to monitor the routers as well as the hosts. That way, a router outage produced just one alert for the router instead of alerts for everything that Nagios couldn't reach through that router. The server team was especially happy not to receive alerts for something the network team needed to handle.
You can try what I have done to automatically monitor, via cron, crucial processes that suddenly shut down for no apparent reason. The script checks the process PID; if none is found, the process has shut down and the script restarts it on its own. This test script was written on my Linux Mint (Ubuntu-based) desktop, so make any necessary adjustments on your side. Here is the script:
Code:
#!/bin/sh
# Restart apache2 if it is no longer running.
PID=$(pidof apache2)
if [ "$PID" = "" ]; then
    # Clear any stale PID file left behind by the crash.
    rm -f /var/run/apache2/*.pid
    systemctl start apache2.service   # or: /etc/init.d/apache2 start
    if [ "$(pidof apache2)" != "" ]; then
        echo "apache2 restarted from unexpected shutdown..."
    fi
fi
You may place this script in /root or a directory of your choice; say, name it chkapache2.sh. You can test it by first shutting down apache2.
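On a systemd distro, a generalised take on the script above could use "systemctl is-active" instead of pidof, so one script can watch any unit. This is just a sketch, not part of the original poster's setup: the unit name is passed as the first argument (defaulting to apache2), and it bails out quietly on systems without systemd.

```shell
#!/bin/sh
# Watchdog for an arbitrary systemd unit, passed as the first argument.
SERVICE="${1:-apache2}"

# Skip quietly on systems without systemd (e.g. older Slackware).
command -v systemctl > /dev/null 2>&1 || exit 0

# is-active exits 0 only when the unit is running.
if ! systemctl is-active --quiet "$SERVICE"; then
    systemctl start "$SERVICE"
    if systemctl is-active --quiet "$SERVICE"; then
        echo "$SERVICE restarted after unexpected shutdown"
    fi
fi
```

Run it as, e.g., "sh chkservice.sh postfix" to watch a different unit with the same script.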
Crontab entries (Just my periodic schedule suggestion):
59 22 * * * sh /root/chkapache2.sh > /dev/null 2>&1
0 3 * * * sh /root/chkapache2.sh > /dev/null 2>&1
0 5 * * * sh /root/chkapache2.sh > /dev/null 2>&1
0 7 * * * sh /root/chkapache2.sh > /dev/null 2>&1
0 10 * * * sh /root/chkapache2.sh > /dev/null 2>&1
0 12 * * * sh /root/chkapache2.sh > /dev/null 2>&1
0 15 * * * sh /root/chkapache2.sh > /dev/null 2>&1
0 18 * * * sh /root/chkapache2.sh > /dev/null 2>&1
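If you'd prefer an even interval to the hand-picked times above, a single crontab step entry can replace all eight lines; this is just an alternative schedule, with the same assumed script path /root/chkapache2.sh:

```shell
# Run the check every 15 minutes instead of at fixed times.
*/15 * * * * sh /root/chkapache2.sh > /dev/null 2>&1
```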
I've been doing this on my production mail server on Slackware, where amavisd-new, postfix and even postgrey suddenly shut down after long periods of operation.
Cheers...
I used hosts as a generic term; in our situation this includes routers, switches, UPSes, etc. In my previous role it also included environmental sensors such as temperature, humidity and moisture sensors. It's NOT fun to get an SMS that the water detection sensor in the comms room floor has activated!