Old 01-17-2019, 12:39 AM   #1
catiewong
Member
 
Registered: Aug 2018
Posts: 190

Rep: Reputation: Disabled
Is there any way to check website availability regularly?


My company's site runs Apache 2.4 on CentOS. Sometimes users complain that they cannot access the website, but I don't find any errors in the logs.

I know there are website checkers that can test availability, but what I want is a regular, automated check without entering the URL manually each time.

Could you advise a good way to do this, for example writing a script that checks whether an HTTP request returns 200?

Thanks
 
Old 01-17-2019, 01:03 AM   #2
berndbausch
LQ Addict
 
Registered: Nov 2013
Location: Tokyo
Distribution: Mostly Ubuntu and Centos
Posts: 6,316

Rep: Reputation: 2002
Create a crontab entry with
Code:
curl -i YOUR_URL
and send yourself an email if you get a status code other than 200.

The -i option displays headers and the status code.
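For example, a minimal cron-able sketch (the URL and recipient address are placeholders, and it assumes a working mail command on the box); here -w '%{http_code}' is used instead of -i so the script gets just the numeric code:
Code:
#!/bin/sh
# Fetch only the numeric HTTP status code: -s silences progress,
# -o /dev/null discards the body, -w prints just the code.
STATUS=$(curl -s -o /dev/null -w '%{http_code}' https://www.example.com/)

# Alert on anything other than 200 (curl prints 000 when the
# connection itself fails, so outages are caught too).
if [ "$STATUS" != "200" ]; then
    echo "Site returned HTTP $STATUS at $(date)" |
        mail -s "Website check failed" admin@example.com
fi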

Alternatively:
Code:
wget YOUR_URL
An exit code other than 0 indicates a problem. The error status code (404, etc.) is printed as well.
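A cron entry along these lines (again, the URL and address are placeholders) would mail you whenever wget exits non-zero:
Code:
# Check every 5 minutes; -q is silent, -O /dev/null discards the page.
*/5 * * * * wget -q -O /dev/null https://www.example.com/ || echo "check failed at $(date)" | mail -s "Site down" admin@example.com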

Last edited by berndbausch; 01-17-2019 at 01:06 AM.
 
1 member found this post helpful.
Old 01-17-2019, 04:49 AM   #3
TenTenths
Senior Member
 
Registered: Aug 2011
Location: Dublin
Distribution: Centos 5 / 6 / 7
Posts: 3,475

Rep: Reputation: 1553
Sign up for https://nodeping.com/
 
Old 01-17-2019, 05:44 AM   #4
l0f4r0
Member
 
Registered: Jul 2018
Location: Paris
Distribution: Debian
Posts: 900

Rep: Reputation: 290
You can use Website-Watcher (https://www.aignes.com/) as well.
Of course, it's *really* worth it only if you have *many* other tasks/websites to give it; otherwise it would be like killing a fly with a cannon.
 
Old 01-17-2019, 07:06 PM   #5
catiewong
Member
 
Registered: Aug 2018
Posts: 190

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by l0f4r0 View Post
You can use Website-Watcher (https://www.aignes.com/) as well.
Of course, it's *really* worth it only if you have *many* other tasks/websites to give it; otherwise it would be like killing a fly with a cannon.
Yes, I know there is paid software that can do that. Is there any good way that is free, e.g. the above suggestion to create a wget/curl cron job to check?
 
Old 01-18-2019, 04:04 AM   #6
l0f4r0
Member
 
Registered: Jul 2018
Location: Paris
Distribution: Debian
Posts: 900

Rep: Reputation: 290
^ Indeed, it is paid software. I just mentioned it in case you didn't know about it, because it can be really useful, even indispensable, for advanced website monitoring, especially when looking for specific keywords. It's a must-have for competitor/informational monitoring...
I don't know which commands this software relies on internally, though...

That being said, you can absolutely meet your requirement with simple curl/wget requests in a cron job, as already suggested.
 
Old 01-18-2019, 04:13 AM   #7
Turbocapitalist
LQ Guru
 
Registered: Apr 2005
Distribution: Linux Mint, Devuan, OpenBSD
Posts: 7,307
Blog Entries: 3

Rep: Reputation: 3721
The curl option would be a capital -I, or spelled out as --head, to check just the headers. That will reduce the load on the server and cost less bandwidth. The exit code is 0 if everything is OK.

Code:
while curl --silent --fail --head http://www.example.com/home/ > /dev/null
do
        # Site still answering; check again in 6 minutes.
        sleep 360
done

# The loop exits once curl fails. --fail also makes curl return
# non-zero on HTTP error codes such as 404 or 500, not just on
# connection failures. Then play a sound and pop up a message:
aplay /usr/share/orage/sounds/Boiling.wav
DISPLAY=:0.0 xmessage -center "web site 'www.example.com' is down $(date)"
Or, if you are getting into more serious numbers of servers and services, look at tools like Zabbix.
 
Old 01-18-2019, 04:54 AM   #8
TenTenths
Senior Member
 
Registered: Aug 2011
Location: Dublin
Distribution: Centos 5 / 6 / 7
Posts: 3,475

Rep: Reputation: 1553
Quote:
Originally Posted by catiewong View Post
yes , I know there are paid software may do that , is there any good way is free eg. the above suggestion create wget curl cron job to check ?
The problem with monitoring internally is that it only goes part of the way. For example, if you are monitoring a server from itself, that doesn't take into account anything like the external network or internet connection. That's why I mentioned NodePing: they attempt to connect to the given URL just like a regular user, so it's a more realistic test of whether the site is available.

Quote:
Originally Posted by Turbocapitalist View Post
Or if you are getting into more serious numbers of servers and services then look at tools like Zabbix
If you're building an internal monitoring system then also consider Nagios or the fork Centreon.

At the moment I'm using a central Centreon instance with remote pollers to monitor 3,500 services on 350 hosts across three data centers; the pollers also monitor each other and can send alerts independently of the central instance. There's also external monitoring with NodePing for 3rd-party remote checking of connectivity to sites. Externally we use Site24x7 to check specific URLs, and I've just noticed that Site24x7 has a free plan with e-mail alerting for up to 5 URLs.
 
Old 01-19-2019, 08:54 AM   #9
Wayne Sallee
Member
 
Registered: Jun 2011
Location: Florida
Distribution: The one that I built. (lfs)
Posts: 269

Rep: Reputation: 17
You could set up a server in another location to do the checks. Any Linux computer in a different location can easily be set up to do this for you.
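For instance, a minimal sketch to cron on that second machine (hostname, URL, and address are placeholders); the -m flag caps how long curl waits, so a hung server also triggers the alert:
Code:
#!/bin/sh
# Run from a box *outside* the web server's network.
# --fail: non-zero exit on HTTP errors >= 400
# -m 15:  give up after 15 seconds, so a hang counts as down
if ! curl --silent --fail -m 15 -o /dev/null https://www.example.com/; then
    echo "www.example.com unreachable from $(hostname) at $(date)" |
        mail -s "External site check failed" admin@example.com
fi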

Wayne Sallee
Wayne@WayneSallee.com
http://www.WayneSallee.com
 
Old 01-22-2019, 08:25 AM   #10
seanmancini
LQ Newbie
 
Registered: Jan 2019
Location: Toronto
Distribution: Ubuntu,RHEL,Debian
Posts: 2

Rep: Reputation: Disabled
I use https://uptimerobot.com/

They have a free basic web check based on HTTP responses; it has worked well for me.

Sean Mancini
https://www.seanmancini.com
sean@seanmancini.com
 
Old 09-21-2019, 06:30 AM   #11
rnturn
Senior Member
 
Registered: Jan 2003
Location: Illinois (SW Chicago 'burbs)
Distribution: openSUSE, Raspbian, Slackware. Previous: MacOS, Red Hat, Coherent, Consensys SVR4.2, Tru64, Solaris
Posts: 2,801

Rep: Reputation: 550
Quote:
Originally Posted by TenTenths View Post
At the moment I'm using a central Centreon instance with remote pollers to monitor 3500 service on 350 hosts across three data centers, the pollers also monitor each other and can send alerts independently of the central instance. There's also external with NodePing for 3rd party remote checking of connectivity to sites. Externally we use Site24x7 to check specific URLs, and I've just noticed that Site24x7 have a free plan with e-mail alerting for up to 5 URLs
When I was wrangling a Nagios monitoring system, we made sure to monitor the routers as well as the hosts. That way, a router outage produced an alert for just the router instead of alerts for everything Nagios couldn't reach through that router. The server team was especially happy not to receive alerts for something the network team needed to handle.

Cheers...
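In Nagios this is done with the parents directive; a rough, hypothetical sketch (host names and addresses invented here): hosts behind a down parent are flagged UNREACHABLE rather than DOWN, which suppresses the alert flood.
Code:
# The router the web servers sit behind.
define host {
    use        generic-host
    host_name  router1
    address    10.0.0.1
}

# Reached through that router: if router1 is DOWN, web01 is
# marked UNREACHABLE instead of DOWN, so no separate alert.
define host {
    use        generic-host
    host_name  web01
    address    10.0.0.11
    parents    router1
}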
 
Old 09-21-2019, 08:41 AM   #12
gani
Member
 
Registered: Jun 2004
Location: Metro Manila, Philippines
Distribution: Linuxmint, Slackware
Posts: 356

Rep: Reputation: 34
You can try what I have done to automatically monitor, via cron, crucial processes that suddenly shut down for no apparent reason. The script checks for the process PID; if it is not found, the process has shut down and the script starts it again on its own. This test script was written on my Linuxmint (Ubuntu) desktop, so make any necessary adjustments on your side. Here is the script:

Code:
#!/bin/sh
# Restart apache2 if it is no longer running.
PID=$(pidof apache2)
if [ "$PID" = "" ]; then
    # Clean up any stale PID file left behind by the crash.
    rm -f /var/run/apache2/*.pid
    systemctl start apache2.service   # or /etc/init.d/apache2 start
    if [ "$(pidof apache2)" != "" ]; then
        echo "apache2 restarted from unexpected shutdown..."
    fi
fi
You may place this script in /root or wherever you prefer; say you name it chkapache2.sh. You can test it by first shutting down apache2.

Crontab entries (just my suggested periodic schedule):

Code:
59 22 * * * sh /root/chkapache2.sh > /dev/null 2>&1
0 3 * * * sh /root/chkapache2.sh > /dev/null 2>&1
0 5 * * * sh /root/chkapache2.sh > /dev/null 2>&1
0 7 * * * sh /root/chkapache2.sh > /dev/null 2>&1
0 10 * * * sh /root/chkapache2.sh > /dev/null 2>&1
0 12 * * * sh /root/chkapache2.sh > /dev/null 2>&1
0 15 * * * sh /root/chkapache2.sh > /dev/null 2>&1
0 18 * * * sh /root/chkapache2.sh > /dev/null 2>&1

I've been doing this on my Slackware mail server, where amavisd-new, postfix, and even postgrey would suddenly shut down after long periods of operation.
 
Old 09-23-2019, 04:07 AM   #13
TenTenths
Senior Member
 
Registered: Aug 2011
Location: Dublin
Distribution: Centos 5 / 6 / 7
Posts: 3,475

Rep: Reputation: 1553
Quote:
Originally Posted by rnturn View Post
When I was wrangling a Nagios monitoring system, we made sure to also monitor the routers as well as the hosts. That way we could keep the alerts due to a router outage down to just the router instead alerts for everything that Nagios couldn't reach through that router. The server team was especially happy to not receive alerts for something that the network team needed to handle.

Cheers...
I used "hosts" as a generic term; in our situation it includes routers, switches, UPSes, etc. In my previous role it also included environmental sensors such as temperature, humidity, and moisture sensors. It's NOT fun to get an SMS message that the water detection sensor in the comms room floor has activated!
 
Old 09-24-2019, 09:26 AM   #14
tyler2016
Member
 
Registered: Sep 2018
Distribution: Debian, CentOS, FreeBSD
Posts: 243

Rep: Reputation: Disabled
I use Zabbix to monitor my stuff. I am pretty happy with it. It does have a little bit of a learning curve, but I picked it up pretty quickly.
 
Old 09-24-2019, 01:36 PM   #15
bathory
LQ Guru
 
Registered: Jun 2004
Location: Piraeus
Distribution: Slackware
Posts: 13,163
Blog Entries: 1

Rep: Reputation: 2032
Monit
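For reference, Monit can do both the HTTP check and the restart. A minimal sketch of a monitrc fragment for Apache on CentOS (the pidfile path and service name are assumptions; adjust for your install):
Code:
# Watch the Apache process and its HTTP port; restart it if the
# process dies or the port stops answering HTTP for two cycles.
check process httpd with pidfile /var/run/httpd/httpd.pid
    start program = "/usr/bin/systemctl start httpd"
    stop program  = "/usr/bin/systemctl stop httpd"
    if failed port 80 protocol http for 2 cycles then restart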
 
  

