How Fault Tolerance is achieved in Hadoop?

  • Author
    Posts
    • #5997
      DataFlair Team
      Spectator

      What is meant by fault tolerance in Hadoop?

    • #5998
      DataFlair Team
      Spectator

      Hadoop is a distributed computing framework in which both data and computation are distributed across the nodes of the cluster.

      Every file is divided into multiple blocks, and each block is copied onto several different nodes. So if any node goes down, the data can still be read from the other nodes that hold its replicas. This is how fault tolerance is achieved in Hadoop.

      Follow the link to learn more about Fault Tolerance in Hadoop
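      To make the block-and-replica idea concrete, here is a minimal Java sketch (assuming a standard Hadoop client classpath; the file path is hypothetical) that asks the NameNode which DataNodes hold the replicas of each block of a file. If one of those nodes fails, the remaining hosts in the list keep the data readable:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.BlockLocation;
      import org.apache.hadoop.fs.FileStatus;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      public class BlockReplicaCheck {
          public static void main(String[] args) throws Exception {
              // Picks up core-site.xml / hdfs-site.xml from the classpath.
              Configuration conf = new Configuration();
              FileSystem fs = FileSystem.get(conf);

              // Hypothetical file path, used only for illustration.
              Path file = new Path("/user/dataflair/sample.txt");
              FileStatus status = fs.getFileStatus(file);

              // Each block is reported together with the DataNodes holding its replicas.
              BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
              for (BlockLocation block : blocks) {
                  System.out.println("Block at offset " + block.getOffset()
                          + " has replicas on: " + String.join(", ", block.getHosts()));
              }
              fs.close();
          }
      }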

    • #6000
      DataFlair Team
      Spectator

      Fault tolerance means that the cluster keeps working normally even when a node in it fails.

      When a file is stored in HDFS it is divided into blocks (this is taken care of by the Hadoop APIs). These blocks are stored on the nodes of the cluster, and a copy of each block is also stored on other nodes. The default replication factor in HDFS is 3, and the replicas are never all placed on the same rack, so if an entire rack goes down a copy is still available on another rack. As soon as a node goes down, the blocks it held are re-replicated onto other nodes (this is taken care of by the NameNode). So even if a node or a rack fails, users can still access the data they want. This is how fault tolerance is achieved in Hadoop.

      For more detail, please follow Fault Tolerance in Hadoop
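      As a rough illustration of the replication factor mentioned above (a sketch only, assuming a standard Hadoop client setup and a hypothetical file path), the dfs.replication setting can be overridden from a Java client, and the replication of an existing file can be changed so that the NameNode schedules additional copies on other DataNodes:

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      public class ReplicationFactorExample {
          public static void main(String[] args) throws Exception {
              Configuration conf = new Configuration();

              // The cluster-wide default (3) comes from dfs.replication in hdfs-site.xml;
              // a client can override it for the files it creates.
              conf.setInt("dfs.replication", 3);

              FileSystem fs = FileSystem.get(conf);

              // Hypothetical file path, used only for illustration.
              Path file = new Path("/user/dataflair/sample.txt");

              // Changing the replication of an existing file: the NameNode then
              // creates the missing replicas (or removes surplus ones) on other DataNodes.
              fs.setReplication(file, (short) 3);

              System.out.println("Replication factor now: "
                      + fs.getFileStatus(file).getReplication());
              fs.close();
          }
      }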
