Quote:
Originally Posted by Sponge_Bob
If I understand correctly, keepalived works on the Network layer (IP) by sending a ping?
If the ping is unsuccessful, redirect to another server?
|
Honestly, it was about 6 years ago that I last used it, so my memory is a tiny bit vague on what did what. I think we used keepalived between two instances of haproxy to ensure that one of them always held the "public" IP address, and then used haproxy to check the backends via HTTP/S.
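For what it's worth, the keepalived half of that kind of setup usually boils down to a small VRRP config. This is only a rough sketch from memory, not our actual config; the interface name, router ID, and addresses are placeholders:

```
# /etc/keepalived/keepalived.conf on the MASTER haproxy node
# (interface, virtual_router_id, and IPs are illustrative)
vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # exits 0 while haproxy is running
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER                  # BACKUP on the second node
    interface eth0
    virtual_router_id 51
    priority 101                  # set lower (e.g. 100) on the BACKUP node
    advert_int 1
    virtual_ipaddress {
        203.0.113.10/24           # the shared "public" IP
    }
    track_script {
        chk_haproxy
    }
}
```

Whichever node wins the VRRP election holds the public IP, and if haproxy dies (or the box does), the other node takes it over.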
Quote:
Originally Posted by Sponge_Bob
On the other hand, if I set up a failover through the nginx http_upstream_module
the check will happen on the Application layer.
|
Probably; there are several ways to achieve this kind of result.
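If you go the nginx route, the upstream module's `backup` flag gives you exactly the "always the same main server, with a standby" behaviour rather than load balancing. A hedged sketch, with placeholder names and addresses:

```
# Illustrative nginx failover config; server addresses/ports are placeholders.
# The "backup" server is only used when the primary has failed
# (max_fails errors within fail_timeout).
upstream app_backend {
    server 10.0.0.1:8080 max_fails=3 fail_timeout=10s;
    server 10.0.0.2:8080 backup;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backend;
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```

Note that open-source nginx only marks a server as failed based on real client requests (passive checks); active health checks are a paid-nginx / haproxy feature.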
Quote:
Originally Posted by Sponge_Bob
is there a possibility to act on the DNS to make an "alive/responding" check ?
|
If you're talking about running checks and then making a public-facing DNS change to point at the new IP, then this is the WORST IDEA EVER! Why? Because you have zero control over TTLs and record propagation times. I know some ISPs don't necessarily respect short TTLs, and even today it's still recommended to allow "up to 24hrs" for public DNS changes to propagate.
Quote:
Originally Posted by Sponge_Bob
I know the Round-robin DNS exist, but I'm afraid that it's limited to load balancing, I don't want load balancing, I just want the failover. (meaning always the same main server with one or more backup)
|
Yeah, Round Robin is the poor man's / last-resort load balancing.
As you understand (although many don't!), there are differences between HA (High Availability) and LB (Load Balancing). Ultimately both rely on something, somewhere, doing a check to see whether one or more backends are still responding.
Ok, I just found my network diagram from "back in the day": we used two haproxy instances with keepalived running between them to ensure that one of those instances always held a specific IP address. Those two instances had multiple inbound listener ports configured in haproxy, and each of those listeners had multiple (up to 6!) backends.

The backend checks in haproxy made calls to the relevant service, rather than relying on ping, to ensure the listener service was actually functioning. If a backend didn't respond correctly, or quickly enough, haproxy would start using a different backend. There were also backend preferences, so if the "primary" backend started responding properly again, haproxy would switch back to it in preference to any currently active backup.
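The haproxy side of that pattern can be sketched in a few lines. Again, this is illustrative rather than our actual config; the listener name, health-check path, and addresses are made up:

```
# Illustrative haproxy listener; names, ports, and addresses are placeholders.
# "option httpchk" makes haproxy probe the service over HTTP instead of ping.
# The "backup" flag means web2 only takes traffic while web1 is failing its
# checks, and traffic returns to web1 as soon as it passes them again.
listen web_front
    bind *:80
    mode http
    option httpchk GET /healthz
    server web1 10.0.0.1:8080 check inter 2s fall 3 rise 2
    server web2 10.0.0.2:8080 check backup
```

The `fall`/`rise` counts control how many consecutive failed/passed checks it takes to flip a server's state, which is what stops a single slow response from triggering a flappy failover.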