Why HDFS stores data using commodity hardware despite higher chance of failures?

This topic contains 1 reply, has 1 voice, and was last updated by dfbdteam3 1 year, 6 months ago.




    How is fault tolerance achieved in HDFS?
    What is the default replication factor?




    There are several reasons why HDFS stores data on commodity hardware despite the higher chance of failures:

    • HDFS is highly fault-tolerant. It achieves fault tolerance by replicating data blocks and distributing them among different DataNodes across the cluster. By default, the replication factor is set to 3, and it is configurable. In Hadoop HDFS, replication of data solves the problem of data loss in unfavorable conditions such as a node crash or hardware failure. So when any machine in the cluster goes down, the client can still access its data from another machine that holds a copy of the same blocks.
    • It provides distributed processing, so every DataNode has a manageable share of work to do.
    • Because replicas of each block already exist on different DataNodes, there is no need for expensive, highly reliable machines; it is economical to store data on commodity hardware.
    • It provides high availability, which means data remains accessible under all conditions, even in the case of machine failure.
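    As a minimal sketch of how the default replication factor mentioned above is configured, the `dfs.replication` property can be set in `hdfs-site.xml` on the cluster (the value 3 below matches the HDFS default):

    ```xml
    <!-- hdfs-site.xml: default number of replicas kept for each block -->
    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>3</value>
      </property>
    </configuration>
    ```

    The replication factor can also be changed for an existing file from the HDFS shell, for example `hdfs dfs -setrep -w 2 /path/to/file` (the path here is only illustrative).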

    For more detail, please follow: HDFS Tutorial

