
  • Author
    Posts
  • #6345

    dfbdteam3
    Moderator

    How is data or a file written into HDFS?

    #6347

    dfbdteam3
    Moderator

    Write Operation:

    * The client node interacts with the namenode.
    * The namenode returns the addresses of the datanodes on which the write should take place.
    * Once the client receives the datanode addresses, it starts writing the data directly to a datanode.
    * The client node does not write to the replicas of a block itself; it writes the block to only one datanode.
    * The datanodes (slaves) know how to share or copy the data among themselves, so the slave nodes replicate the block among themselves (see the sketch after this list).
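
    A minimal sketch of this flow from the client side, using the Hadoop Java FileSystem API (the namenode address and file path are assumptions for illustration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Assumed namenode address; replace with your cluster's fs.defaultFS.
            conf.set("fs.defaultFS", "hdfs://namenode:9000");

            // FileSystem.get() talks to the namenode through the distributed file system client.
            try (FileSystem fs = FileSystem.get(conf);
                 // create() returns an FSDataOutputStream; the client streams bytes to the
                 // first datanode in the pipeline, and the datanodes replicate among themselves.
                 FSDataOutputStream out = fs.create(new Path("/user/demo/sample.txt"))) {
                out.writeUTF("Hello HDFS");
            } // close() completes only after the pipeline acknowledgments come back.
        }
    }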

    At the API level, the client sends a request to the distributed file system to fetch the addresses of the datanodes (slaves) that will hold the blocks. The write operation uses an FSDataOutputStream to write the data to a block. As soon as the block is written to the first datanode/slave, the replication pipeline copies it to the remaining datanodes to keep all replicas of the block consistent.
    Once the replication process is complete, an acknowledgment is sent from the last datanode to the previous one, and so on, until it reaches the first datanode, which sends the final acknowledgment back to the FSDataOutputStream.
    If a particular datanode crashes, the namenode is intelligent enough to send the address of another active datanode hosting that block so the write operation can proceed smoothly. Throughout the process, the namenode stays in constant communication with each of the datanodes (slaves).
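
    After the write completes, a client can ask the namenode where the replicas of each block ended up, which makes the replication described above visible (the path and namenode address are again assumed):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsBlockLocations {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://namenode:9000"); // assumed namenode address

            try (FileSystem fs = FileSystem.get(conf)) {
                FileStatus status = fs.getFileStatus(new Path("/user/demo/sample.txt"));
                // The namenode reports, for each block, the datanodes holding a replica.
                for (BlockLocation block : fs.getFileBlockLocations(status, 0, status.getLen())) {
                    System.out.println("Block at offset " + block.getOffset()
                            + " has replicas on: " + String.join(", ", block.getHosts()));
                }
            }
        }
    }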

    Follow the link for more detail on the Write Operation.

