What is the Safe Mode problem? How does a user come out of safe mode in HDFS?


Viewing 4 reply threads
    • #5781
      DataFlair Team
      Spectator

      What is the Safe Mode problem in Hadoop?
      When does the NameNode enter safe mode, and why?

    • #5784
      DataFlair Team
      Spectator

      Safe Mode is a state in Hadoop in which the HDFS cluster is read-only, i.e. no data can be written to blocks, and no deletion or replication of blocks can happen. During this state the NameNode is effectively in maintenance mode. The NameNode implicitly enters Safe Mode at startup of the HDFS cluster because, at startup, it gives the DataNodes some time to report their data blocks, so that it does not start the replication process without knowing whether sufficient replicas are already present.

      Once all the validations are done by the NameNode, Safe Mode is implicitly disabled.

      Sometimes the NameNode is not able to come out of Safe Mode. For example:
      the NameNode allocated a block and was then killed before the HDFS client got the addBlock response. After the NameNode restarted, it could not get out of Safe Mode because it was waiting for a block that was never created. In this case we cannot write data to HDFS, since the cluster is still in read-only Safe Mode.

      To resolve this, we need to manually exit Safe Mode by running the following command:
      sudo -u hdfs hdfs dfsadmin -safemode leave
      (hdfs dfsadmin replaces the older hadoop dfsadmin form, which is deprecated.)
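The stuck-in-safe-mode scenario above can be sketched with a toy model. This is NOT Hadoop source code; the function name is hypothetical, and the 0.999 default is assumed to mirror the dfs.namenode.safemode.threshold-pct default. The point is only to show why a block that was allocated but never written keeps the reported/expected ratio below the exit threshold forever.

```python
# Toy model of the NameNode's safe-mode exit decision (NOT Hadoop code).
# threshold=0.999 mirrors the default of dfs.namenode.safemode.threshold-pct.

def safe_mode_should_exit(expected_blocks, reported_blocks, threshold=0.999):
    """Exit safe mode once enough of the expected blocks have been reported."""
    if expected_blocks == 0:
        return True  # an empty filesystem has nothing to wait for
    return reported_blocks / expected_blocks >= threshold

# Normal startup: DataNodes eventually report every expected block.
print(safe_mode_should_exit(expected_blocks=1000, reported_blocks=1000))  # True

# Stuck case from the post: two blocks were allocated but never actually
# written, so they are never reported; 998/1000 = 0.998 < 0.999, and the
# NameNode waits indefinitely until an operator forces it out.
print(safe_mode_should_exit(expected_blocks=1000, reported_blocks=998))   # False
```

In real Hadoop the manual override for exactly this situation is the dfsadmin -safemode leave command shown above.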

    • #5786
      DataFlair Team
      Spectator

      Safe mode is the read-only mode of the HDFS cluster. When the NameNode starts, it loads the filesystem state from the fsimage and the edits log, and then waits for the DataNodes to report their blocks, so that it does not prematurely start replicating blocks.

      It exits safe mode once 99.9% of the blocks in the whole filesystem meet their minimum replication level (set via dfs.namenode.replication.min; the 99.9% threshold itself is dfs.namenode.safemode.threshold-pct).

      Sometimes the NameNode cannot come out of safe mode on its own; you can then exit it manually with the command:
      sudo -u hdfs hdfs dfsadmin -safemode leave
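The minimum-replication rule described above can be illustrated with a small sketch. Again this is NOT Hadoop source; the function name is made up, and min_replication=1 is assumed to match the dfs.namenode.replication.min default. It shows how the fraction of "safe" blocks is counted, which is the number that gets compared against the 99.9% threshold.

```python
# Toy illustration (NOT Hadoop code): a block only counts toward the
# safe-mode exit ratio once it has at least the minimum number of reported
# replicas (dfs.namenode.replication.min, default 1).

def safe_block_ratio(replica_counts, min_replication=1):
    """Fraction of blocks whose reported replica count meets the minimum."""
    if not replica_counts:
        return 1.0  # no blocks: trivially satisfied
    safe = sum(1 for n in replica_counts if n >= min_replication)
    return safe / len(replica_counts)

# Five blocks; one has no reported replicas yet -> 4/5 = 0.8 < 0.999,
# so the NameNode would stay in safe mode.
reported = [3, 2, 1, 0, 2]
print(safe_block_ratio(reported))             # 0.8
print(safe_block_ratio(reported) >= 0.999)    # False
```

Raising min_replication makes the check stricter: with min_replication=2 only three of the five blocks above would count as safe.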

    • #5788
      DataFlair Team
      Spectator

      On startup, the NameNode enters a state called Safe Mode.
      During Safe mode:

      1. Replication and deletion of data blocks do not occur (the HDFS cluster is read-only).

      2. The NameNode receives heartbeats and block reports from the DataNodes. A block report contains the list of data blocks that a DataNode is hosting.

      To know the status of Safemode, use command:
      hdfs dfsadmin -safemode get

      To enter Safemode, use command:
      hdfs dfsadmin -safemode enter

      To come out of Safemode, use command:
      hdfs dfsadmin -safemode leave

    • #5789
      DataFlair Team
      Spectator

      In Safe Mode, the HDFS cluster goes into a read-only state in which it performs no block replication or deletion.

      On startup the NameNode goes into Safe Mode. While in Safe Mode the NameNode collects block reports from the DataNodes, and once the DataNodes have reported enough blocks, the NameNode comes out of Safe Mode.

      – To know the status of Safe Mode –
      $ hdfs dfsadmin -safemode get

      – To enter Safe Mode –
      $ hdfs dfsadmin -safemode enter

      – To leave Safe Mode –
      $ hdfs dfsadmin -safemode leave
