To understand DNS failover, let’s first look at what failover is, when it is used, and what the advantages of a failover architecture are.
WHAT IS FAILOVER?
Failover is the operational process of switching from a primary to a secondary server in the event of downtime. Such downtime can be caused by scheduled maintenance or by an unexpected system or component failure.
In either case, the objective is fault tolerance: ensuring that mission-critical applications and systems remain available, regardless of the type or extent of the fault. In the larger picture, failover is a key component of business continuity plans, especially for businesses that are computing-centric.
ADVANTAGES OF FAILOVER ARCHITECTURE
The main advantage of implementing server failover is High Availability (HA). In the event of a primary server failure, the failover (secondary) server serves the requests coming from end users. This means near-zero downtime, which in turn protects you from the revenue loss that downtime causes.
Failover also simplifies maintenance windows for upgrades, updates, or disaster recovery. Without failover, sudden downtime becomes a scramble for the entire team to handle. With failover in place, requests continue to be served by the secondary server, so users see no interruption.
WHAT IS DNS FAILOVER?
DNS failover is a technique in which traffic is rerouted to the secondary server at the DNS level. It relies on health-checking agents that monitor the availability of each endpoint. When a failure is detected, the agent routes traffic away from the failed endpoint and distributes it across the remaining healthy endpoints, typically using a round-robin methodology.
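The mechanism above can be sketched as a small simulation: a pool of endpoints that answers DNS-style queries round-robin over only the endpoints currently marked healthy. This is a minimal illustrative model, not a real DNS server; the class name `DnsFailoverPool` and the IP addresses (from the 203.0.113.0/24 documentation range) are made up for the example.

```python
import itertools


class DnsFailoverPool:
    """Toy model of DNS failover: round-robin over the healthy endpoints only."""

    def __init__(self, endpoints):
        self.endpoints = list(endpoints)
        self.healthy = set(self.endpoints)  # all endpoints start healthy
        self._rebuild()

    def _rebuild(self):
        # Rebuild the round-robin iterator whenever the healthy set changes.
        self._cycle = itertools.cycle(sorted(self.healthy)) if self.healthy else None

    def mark_down(self, endpoint):
        """Health checker reports an endpoint as failed: stop routing to it."""
        self.healthy.discard(endpoint)
        self._rebuild()

    def mark_up(self, endpoint):
        """Health checker reports recovery: put the endpoint back in rotation."""
        if endpoint in self.endpoints:
            self.healthy.add(endpoint)
            self._rebuild()

    def resolve(self):
        """Answer a query with the next healthy endpoint, round-robin."""
        if self._cycle is None:
            raise RuntimeError("no healthy endpoints available")
        return next(self._cycle)
```

With two endpoints, marking the primary down means every query is answered with the secondary until the primary is marked up again:

```python
pool = DnsFailoverPool(["203.0.113.10", "203.0.113.20"])
pool.mark_down("203.0.113.10")
print(pool.resolve())  # only 203.0.113.20 is returned while the primary is down
```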
This ensures that if a pre-defined server or site goes offline, traffic is automatically routed to a secondary IP address.
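The "is this endpoint offline?" decision is made by a health check. A common minimal check is simply attempting a TCP connection to the service port within a short timeout; the sketch below shows that approach, assuming TCP reachability is an acceptable proxy for "healthy" (real health checks often also verify an application-level response, such as an HTTP status code).

```python
import socket


def endpoint_is_healthy(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout.

    A failed or timed-out connection is treated as an unhealthy endpoint,
    which would trigger the DNS failover to a secondary IP address.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

In practice an agent would run this check on a schedule (e.g. every few seconds) and require several consecutive failures before declaring the endpoint down, to avoid failing over on a single transient error.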