Hadoop High Availability – Namenode Automatic Failover


Before Hadoop 2.0, that is in Hadoop 1.0, the NameNode was a single point of failure (SPOF). If the NameNode failed, the entire system stopped functioning, and manual intervention was needed to bring the Hadoop cluster back up with the help of the secondary NameNode, which resulted in overall downtime. Hadoop 2.0 introduced a single standby NameNode to facilitate automatic failover, and Hadoop 3.0 supports multiple standby NameNodes, making the system even more highly available. In this tutorial, we will talk about Hadoop high availability. We will look at the types of failover and discuss in detail how the components of Zookeeper provide automatic failover.

[Figure: Hadoop High Availability – Automatic Failover]

1. What is Hadoop High Availability?

With Hadoop 2.0, we have support for a single standby NameNode, and with Hadoop 3.0 we have support for multiple standby NameNodes. This overcomes the SPOF (Single Point Of Failure) issue by using an extra NameNode (a passive standby NameNode) for automatic failover. This is high availability in Hadoop.
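
To make this concrete, below is a minimal hdfs-site.xml sketch for an HA pair. The nameservice ID "mycluster", the NameNode IDs "nn1"/"nn2", and the host names are hypothetical placeholders, not values from this article:

    <!-- hdfs-site.xml: minimal HA sketch; "mycluster", "nn1", "nn2",
         and the host names are placeholder values -->
    <property>
      <name>dfs.nameservices</name>
      <value>mycluster</value>
    </property>
    <property>
      <name>dfs.ha.namenodes.mycluster</name>
      <value>nn1,nn2</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn1</name>
      <value>nn1-host.example.com:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn2</name>
      <value>nn2-host.example.com:8020</value>
    </property>
    <!-- Clients use this class to discover which NameNode is active -->
    <property>
      <name>dfs.client.failover.proxy.provider.mycluster</name>
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>

With Hadoop 3.0, the dfs.ha.namenodes.mycluster list can simply name more than two NameNodes.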

i. What is Failover?

Failover is the process by which the system transfers control to a secondary system in the event of a failure.

There are two types of failover:

  • Graceful Failover – The administrator initiates this type of failover manually, for example during routine system maintenance. Control must be transferred to the standby NameNode by hand; it does not happen automatically (see the command sketch after this list).
  • Automatic Failover – In automatic failover, the system transfers control to the standby NameNode without manual intervention. Without automatic failover, if the NameNode goes down then the entire system goes down. Hence Hadoop high availability depends on automatic failover; it acts as your insurance policy against a single point of failure.
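
A graceful failover is triggered with the hdfs haadmin tool. A minimal sketch, assuming the NameNode IDs nn1 and nn2 from an HA configuration like the one above:

    # Administrator-initiated (graceful) failover: make nn2 the active NameNode
    hdfs haadmin -failover nn1 nn2

    # Check which NameNode is currently active or standby
    hdfs haadmin -getServiceState nn1
    hdfs haadmin -getServiceState nn2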

2. NameNode High Availability in Hadoop

Automatic failover in Hadoop adds the following components to an HDFS deployment (a sample configuration follows the list):

  • ZooKeeper quorum.
  • ZKFailoverController Process (ZKFC).
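
Enabling automatic failover is a configuration change plus a one-time initialization. A sketch, with placeholder ZooKeeper host names:

    <!-- hdfs-site.xml: turn on automatic failover -->
    <property>
      <name>dfs.ha.automatic-failover.enabled</name>
      <value>true</value>
    </property>

    <!-- core-site.xml: the ZooKeeper quorum the ZKFCs connect to;
         host names below are placeholders -->
    <property>
      <name>ha.zookeeper.quorum</name>
      <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
    </property>

After this, the HA state in ZooKeeper is initialized once by running hdfs zkfc -formatZK from one of the NameNode hosts.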

i. Zookeeper Quorum

A Zookeeper quorum is a centralized service for maintaining small amounts of data for coordination, configuration, and naming. It provides group services and synchronization, keeps clients informed about changes in data, and tracks client failures. The implementation of automatic HDFS failover relies on Zookeeper for:

  • Failure detection – Zookeeper maintains a session with each NameNode. In the event of a failure, this session expires and Zookeeper informs the other NameNodes to start the failover process.
  • Active NameNode election – Zookeeper provides a method to elect a node as the active node. Whenever the active NameNode fails, another NameNode acquires an exclusive lock in Zookeeper, stating that it wants to become the next active NameNode (see the zkCli sketch after this list).
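
You can observe this lock directly with the ZooKeeper CLI. A sketch, assuming the placeholder nameservice ID "mycluster"; the znode names shown are the ones Hadoop's elector creates:

    # Connect to a quorum member and list the HA znodes
    zkCli.sh -server zk1.example.com:2181
    ls /hadoop-ha/mycluster
    # e.g. [ActiveBreadCrumb, ActiveStandbyElectorLock]

    # The ephemeral lock exists only while some ZKFC holds the election
    get /hadoop-ha/mycluster/ActiveStandbyElectorLock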

ii. ZKFailoverController (ZKFC)

ZKFC is a Zookeeper client that monitors and manages the NameNode status. Each machine that runs the NameNode service also runs a ZKFC.

ZKFC handles:

Health Monitoring – ZKFC periodically pings the active NameNode with a health-check command. If the NameNode does not respond in time, ZKFC marks it as unhealthy; this may happen because the NameNode has crashed or frozen.
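
The ping interval and timeout are configurable in core-site.xml. A sketch of the relevant knobs (the values shown are illustrative, not recommendations):

    <!-- How often ZKFC health-checks its local NameNode -->
    <property>
      <name>ha.health-monitor.check-interval.ms</name>
      <value>1000</value>
    </property>
    <!-- How long ZKFC waits for a health-check response -->
    <property>
      <name>ha.health-monitor.rpc-timeout.ms</name>
      <value>45000</value>
    </property>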

Zookeeper Session Management – While the local NameNode is healthy, ZKFC keeps a session open in Zookeeper. If the local NameNode is active, it also holds a special lock znode. If the session expires, this lock znode is deleted automatically.
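
How quickly that session expires, and hence how quickly a dead active NameNode's lock is released, is governed by a core-site.xml setting. A sketch with an illustrative value:

    <!-- ZooKeeper session timeout for the ZKFCs; after this long without
         heartbeats, the session expires and the lock znode is freed -->
    <property>
      <name>ha.zookeeper.session-timeout.ms</name>
      <value>10000</value>
    </property>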

Zookeeper-based Election – If the local NameNode is healthy and ZKFC sees that no other node currently holds the lock znode, ZKFC itself tries to acquire the lock. If it succeeds, it has won the election and becomes responsible for running a failover. The failover is similar to a manual failover: first, the previously active node is fenced if required, and then the local node becomes the active node.
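
Fencing is configured in hdfs-site.xml. A common sketch uses sshfence, which SSHes to the previously active node and kills its NameNode process; the key path below is a placeholder:

    <property>
      <name>dfs.ha.fencing.methods</name>
      <value>sshfence</value>
    </property>
    <property>
      <name>dfs.ha.fencing.ssh.private-key-files</name>
      <value>/home/hdfs/.ssh/id_rsa</value>
    </property>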

3. Summary

Hence, in this Hadoop High Availability article, we saw that Zookeeper daemons are typically configured to run on three or five nodes. Since Zookeeper has modest resource requirements, it can run on the same nodes as the active and standby NameNodes. Many operators choose to deploy the third Zookeeper process on the same node as the YARN ResourceManager. However, it is advisable to keep Zookeeper data separate from HDFS metadata, i.e. on different disks, as this gives the best performance and isolation.
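
As a sketch of such a deployment, a three-node zoo.cfg might look like the following, with placeholder host names and dataDir pointing at a disk separate from the HDFS metadata disks:

    tickTime=2000
    initLimit=10
    syncLimit=5
    # Keep ZooKeeper data off the disks holding HDFS metadata
    dataDir=/zookeeper-disk/data
    clientPort=2181
    server.1=nn1-host.example.com:2888:3888
    server.2=nn2-host.example.com:2888:3888
    server.3=rm-host.example.com:2888:3888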


Still, if you have any doubts regarding Hadoop High Availability, ask in the comments. We will definitely get back to you.


2 Responses

  1. Pranay says:

    My question is: how does Hadoop know which NameNode is active and which is standby? Also, if Zookeeper fails, then who is going to tell Hadoop?

  2. K P says:

    The active NameNode is the one holding the znode lock. Also, in the case of JournalNodes, the JournalNodes allow only one NameNode to write to them, to avoid a split-brain scenario.
