Forums › Hadoop › mkdir: Cannot create directory /data. Name node is in safe mode.

This topic contains 2 replies, has 1 voice, and was last updated by  dfbdteam3 1 year ago.

Viewing 3 posts - 1 through 3 (of 3 total)
  • Author
    Posts
  • #5463

    dfbdteam3
    Moderator

    “While creating directory I am getting below exception: mkdir: Cannot create directory /data. Name node is in safe mode.
    org.apache.hadoop.hdfs.server.namenode.SafeModeException:”

    #5479

    dfbdteam3
    Moderator

    Safe mode in Hadoop is a read-only mode for the Namenode, and hence for the complete Hadoop cluster: clients are not allowed to write to the file system (HDFS). So if, at this point, a command that changes the file system, such as mkdir, is run, it fails with the error quoted in the question.
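Assuming a running cluster, the safe-mode state can be checked from the command line before retrying (the output lines in the comments are illustrative):

```shell
# Report whether the Namenode is currently in safe mode (prints "Safe mode is ON" or "... OFF")
hdfs dfsadmin -safemode get

# While safe mode is ON, any write operation fails, e.g.:
hdfs dfs -mkdir /data
# mkdir: Cannot create directory /data. Name node is in safe mode.
```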

    Now, taking a step further, let us see how a Namenode goes into safe mode.

    Firstly, the Namenode enters safe mode when there is a shortage of resources (such as memory or disk space). As a result, HDFS becomes read-only: we cannot create any additional directory or file in HDFS, as low storage could affect the replication of files.
    Secondly, during startup the Namenode loads the filesystem state from the fsimage and the edits log. It then waits for the Datanodes to report their blocks, so that it does not prematurely start replicating blocks, which could result in over-replication. During this time, the Namenode stays in safe mode.
    To come out of safe mode, the following command can be used:
    $ hadoop dfsadmin -safemode leave
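For reference, a few related subcommands (in newer Hadoop releases the `hadoop dfsadmin` entry point is deprecated in favour of `hdfs dfsadmin`; all of the following require a running cluster):

```shell
# Report the current safe-mode state (ON/OFF)
hdfs dfsadmin -safemode get

# Block until the Namenode leaves safe mode on its own
hdfs dfsadmin -safemode wait

# Force the Namenode out of safe mode
hdfs dfsadmin -safemode leave

# Put the Namenode back into safe mode manually (e.g. for maintenance)
hdfs dfsadmin -safemode enter
```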

    #5480

    dfbdteam3
    Moderator

    The Namenode being in safe mode means Hadoop is in read-only mode. At this point, Hadoop won’t allow any files to be written or appended in HDFS, i.e., no changes are allowed in HDFS.
    So, if we try to create a directory in HDFS, the error quoted in the question will be thrown.

    The Namenode may go into safe mode for two reasons:

    Either the Namenode is out of resources (such as memory or disk space); HDFS then becomes read-only, as there is not enough space for storage.
    Or, during Namenode startup, it constructs the filesystem metadata by loading the fsimage and edits log files into memory. Then it waits for the Datanodes to send their block information, so that it doesn’t start replication of blocks prematurely, which could result in over-replication. It waits till the entire filesystem reaches the minimum replication factor (1 by default, configurable via dfs.replication.min).
    During this wait time, the Namenode stays in safe mode.
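As a sketch, the minimum-replication setting mentioned above lives in hdfs-site.xml; note that property names vary by Hadoop version (newer releases use dfs.namenode.replication.min and dfs.namenode.safemode.threshold-pct, while older ones use dfs.replication.min and dfs.safemode.threshold.pct). The values shown are the defaults, for illustration:

```xml
<!-- hdfs-site.xml -->
<configuration>
  <property>
    <!-- Minimal block replication required before a block is considered "safe" -->
    <name>dfs.namenode.replication.min</name>
    <value>1</value>
  </property>
  <property>
    <!-- Fraction of blocks that must meet minimum replication
         before the Namenode leaves safe mode automatically -->
    <name>dfs.namenode.safemode.threshold-pct</name>
    <value>0.999f</value>
  </property>
</configuration>
```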

    To come out of safe mode, the following command can be used:
    $ hadoop dfsadmin -safemode leave
