Existing software, such as Nagios,
might be the first avenue to explore. This software is designed for general-purpose monitoring of hardware and software state. When you're hitchhiking somewhere, you want to hitch a ride with the biggest, baddest bus that will take you the longest distance toward your final destination in one go, with the least possible effort on your part, and preferably with free drinks.
If that doesn't pan out, the fallback that comes to mind is a combination of software: a set of daemons plus a web server.
In the Perl environment where I do most of my work these days, there are plenty of existing packages for web-page scraping and other forms of interaction with remote sites. (All of the software I'm describing here is platform-independent.) You will need to construct an appropriate tool for each site, and each type of site, that you must draw information from. The information gets added to a common database, which should also include some kind of notification table into which the various daemons record newly arrived records.
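As a rough sketch of what one such per-site scraper might look like (assuming LWP::UserAgent for the fetch and DBI over SQLite for storage; the table names scrape_results and notifications are only illustrative):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
use DBI;

# Hypothetical target; in practice one of these scripts (or modules) exists
# per site, each knowing how to pull the fields it cares about.
my $url = 'https://example.com/status';

my $ua  = LWP::UserAgent->new( timeout => 30 );
my $res = $ua->get($url);
die 'Fetch failed: ' . $res->status_line . "\n" unless $res->is_success;

my $dbh = DBI->connect( 'dbi:SQLite:dbname=scrapes.db', '', '',
                        { RaiseError => 1, AutoCommit => 1 } );

# Store the scraped content in the common database ...
$dbh->do( 'INSERT INTO scrape_results (url, fetched_at, body) VALUES (?, ?, ?)',
          undef, $url, time(), $res->decoded_content );

# ... and leave a marker in the notification table so other processes
# can see that a new record has arrived.
$dbh->do( 'INSERT INTO notifications (url, fetched_at) VALUES (?, ?)',
          undef, $url, time() );
```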
It will also be desirable to have this table include the list of hosts, the protocol (Perl module ...) to be used for each, an active/inactive status, and the time of the last scrape.
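One plausible shape for that control table, again assuming DBI over SQLite and a made-up table name scrape_targets:

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( 'dbi:SQLite:dbname=scrapes.db', '', '',
                        { RaiseError => 1 } );

# One row per host: which Perl module scrapes it, whether it is currently
# active, and when it was last scraped (epoch seconds).
$dbh->do(<<'SQL');
CREATE TABLE IF NOT EXISTS scrape_targets (
    host         TEXT PRIMARY KEY,
    scraper      TEXT NOT NULL,              -- e.g. 'WWW::Mechanize'
    active       INTEGER NOT NULL DEFAULT 1,
    last_scraped INTEGER
)
SQL
```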
A "pool" of daemon processes each sit around, polling the database table to find when a piece of work is to be done, selecting a record and doing the scrape. The pool members are persistent processes, and their numbers and mix (not the number of units-of-work waiting to be done) determine how the work is attempted.
A secure web page provides an interface to the database and a control mechanism for end users.
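As a rough illustration of the web-facing piece, here is a tiny read-only view of the same assumed control table using Mojolicious::Lite; authentication, TLS, and the actual control actions (activating or deactivating a host, and so on) would still need to be layered on top:

```perl
use strict;
use warnings;
use Mojolicious::Lite;
use DBI;

# Read-only view: list all scrape targets and their last-scraped times.
get '/targets' => sub {
    my $c   = shift;
    my $dbh = DBI->connect( 'dbi:SQLite:dbname=scrapes.db', '', '',
                            { RaiseError => 1 } );
    my $rows = $dbh->selectall_arrayref(
        'SELECT host, scraper, active, last_scraped FROM scrape_targets',
        { Slice => {} } );
    $c->render( json => $rows );
};

app->start;
```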
An operating-system agnostic solution can be constructed in any case.
Don't even think of using "Bash scripts."