In Hadoop, the failure of one datanode does not affect read/write access to data. Multiple copies of each block are stored on other datanodes, so the failure of a single node will not impact our work: the block can simply be read from another datanode when one of the datanodes (slaves) fails.
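To make the idea concrete, here is a minimal toy model (not a real HDFS API; all names are illustrative) showing that when every block is replicated on several datanodes, losing one node leaves every block readable from a surviving replica:

```python
import itertools

def place_blocks(blocks, datanodes, replication=3):
    """Assign each block to `replication` distinct datanodes (round-robin sketch).

    Consecutive picks from the cycle are distinct as long as
    len(datanodes) >= replication, mimicking HDFS placing replicas
    on different nodes.
    """
    placement = {}
    cycle = itertools.cycle(datanodes)
    for block in blocks:
        placement[block] = {next(cycle) for _ in range(replication)}
    return placement

def readable_blocks(placement, failed):
    """Blocks that are still readable after the `failed` datanodes go down."""
    return {block for block, nodes in placement.items() if nodes - failed}

datanodes = ["dn1", "dn2", "dn3", "dn4"]
placement = place_blocks(["blk_1", "blk_2", "blk_3"], datanodes)

# With replication factor 3, every block survives any single-node failure:
print(sorted(readable_blocks(placement, failed={"dn1"})))
# → ['blk_1', 'blk_2', 'blk_3']
```

A block only becomes unreadable once every datanode holding one of its replicas has failed, which is why a replication factor of 3 tolerates up to two simultaneous node failures for any given block.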
The number of copies of each block stored across datanodes is controlled by the replication factor. By default the replication factor in HDFS is 3, but you can increase (or decrease) it to suit your requirements. The replication factor can be changed in hdfs-site.xml, which is located under the “<HADOOP-HOME>/etc/hadoop” directory.
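For example, the default replication factor can be set in hdfs-site.xml with the `dfs.replication` property (an illustrative fragment; your file will contain other properties as well):

```xml
<!-- hdfs-site.xml, under <HADOOP-HOME>/etc/hadoop -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <!-- 3 is the HDFS default; adjust per your durability needs -->
    <value>3</value>
  </property>
</configuration>
```

Note that this setting applies to files created after the change; files already in HDFS keep their existing replication factor unless you change it explicitly with `-setrep`.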
We can also change the replication factor of data already in HDFS from the command line, using the hdfs command “hdfs dfs -setrep -R -w 3 hadoop-env.sh”.
hadoop-env.sh -> located in “<HADOOP-HOME>/etc/hadoop” (the setrep example above assumes a copy of this file has already been put into HDFS).
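A short command-line sketch of the workflow, assuming a running cluster (the HDFS path is illustrative):

```shell
# Copy the local file into HDFS first
hdfs dfs -put hadoop-env.sh hadoop-env.sh

# Set its replication factor to 3; -w waits until replication completes
hdfs dfs -setrep -w 3 hadoop-env.sh

# Verify: %r prints the current replication factor of the file
hdfs dfs -stat %r hadoop-env.sh
```

The `-R` flag in the earlier command matters only when the target is a directory, in which case setrep is applied recursively to the files under it.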
Follow the link to learn more about Fault Tolerance in HDFS.