What is Fault Tolerance in HDFS?

    • #4888
      DataFlair Team
      Spectator

      What is meant by fault tolerance in HDFS?
      How is fault tolerance achieved in Hadoop?

    • #4889
      DataFlair Team
      Spectator

      In Hadoop, the failure of one node does not affect access to data (read/write operations) on the DataNodes. Multiple copies of the same block are stored on other DataNodes, so the failure of one node does not impact our work: when one of the DataNodes (slaves) fails, we can use the block's replica from another DataNode.
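
      To see the replicas for yourself, you can ask HDFS where the blocks of a file live. A minimal sketch, assuming a file already exists at the hypothetical HDFS path /user/hadoop/sample.txt:

          hdfs fsck /user/hadoop/sample.txt -files -blocks -locations

      With the default replication factor of 3, each block is reported with three DataNode locations; if a DataNode fails, the NameNode re-replicates its blocks onto the remaining nodes.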

      Multiple copies of each block are placed on DataNodes by means of the replication factor. By default, the replication factor in HDFS is 3, but you can increase it as per your requirement. We can change the replication factor in hdfs-site.xml, which is located under the “<HADOOP-HOME>/etc/hadoop” directory.
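
      For example, a minimal hdfs-site.xml entry setting the replication factor (dfs.replication is the standard property name; 3 is the default value):

          <configuration>
            <property>
              <name>dfs.replication</name>
              <value>3</value>
            </property>
          </configuration>

      Note that dfs.replication only applies to files written after the change; files already in HDFS keep the replication factor they were created with.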

      We can also change the replication factor of a file that is already stored in HDFS from the command line, using the hdfs command “hdfs dfs -setrep -R -w 3 <hdfs-path>”. Note that -setrep takes a path inside HDFS, not a local file such as hadoop-env.sh (a local configuration file located in “<HADOOP-HOME>/etc/hadoop”).
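
      As a quick check, a sketch assuming the hypothetical HDFS path /user/hadoop/sample.txt; after -setrep completes, the second column of the ls output shows the file's new replication factor:

          hdfs dfs -setrep -w 3 /user/hadoop/sample.txt
          hdfs dfs -ls /user/hadoop/sample.txt

      The -w flag makes the command wait until the target replication is actually reached, which can take some time on a busy cluster.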

      Follow the link to learn more about Fault Tolerance in HDFS.
