HDFS Data Write Operation – An HDFS Tutorial


1. Objective

In this tutorial, we will discuss how data is stored in HDFS. It gives a brief introduction to HDFS and then walks through the end-to-end data write pipeline. The HDFS data write operation starts with the client interacting with the namenode, while the actual write is performed on the datanodes.

2. Introduction to HDFS

HDFS is the file system, or storage layer, of Hadoop. It reliably stores very large datasets, in the range of petabytes, by dividing data into blocks and distributing those blocks across the machines in the cluster. HDFS is widely regarded as one of the most reliable large-scale storage systems. To learn more about HDFS features, follow this guide.

3. HDFS Data Write Operation

This section of the HDFS tutorial explains how the HDFS data write operation is performed in Hadoop.

Figure: HDFS data write operation

i. Interaction of Client with NameNode

To create a file in HDFS, the client first interacts with the namenode, since the namenode is the centerpiece of the cluster and holds all the metadata. The namenode provides the addresses of the slaves (datanodes) on which the client can write its data. The client also receives a security token from the namenode, which it must present to the datanodes for authentication before writing a block. The client performs the following steps to write data in HDFS:

To create a file, the client calls the create() method on DistributedFileSystem. DistributedFileSystem then makes an RPC call to the namenode to create a new file, with no blocks associated with it, in the filesystem's namespace. The namenode performs various checks to make sure that no such file already exists and that the client is authorized to create a new file.

If these checks pass, the namenode makes a record of the new file; otherwise, file creation fails and an IOException is thrown to the client. DistributedFileSystem returns an FSDataOutputStream to the client so it can start writing data to the datanodes. Communication with the datanodes is handled by DFSOutputStream, which is wrapped by FSDataOutputStream.
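A minimal sketch of this client-side sequence, assuming a Hadoop client library on the classpath; the NameNode address and file path below are placeholders for illustration:

```java
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://namenode:8020");   // placeholder NameNode address

    // FileSystem.get() returns a DistributedFileSystem instance for an hdfs:// URI.
    FileSystem fs = FileSystem.get(conf);

    // create() triggers the RPC to the namenode that adds the file (with no blocks yet)
    // to the namespace; it returns an FSDataOutputStream that wraps DFSOutputStream.
    Path file = new Path("/user/dataflair/sample.txt"); // placeholder path
    try (FSDataOutputStream out = fs.create(file)) {
      out.write("Hello HDFS".getBytes(StandardCharsets.UTF_8));
    } // close() flushes the remaining packets and tells the namenode the file is complete
  }
}
```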

ii. Interaction of Client with Datanodes

Once the client is authenticated to create a new file in the filesystem namespace, the namenode provides the locations at which to write the blocks. The client then goes directly to the datanodes and starts writing the data blocks there. Because HDFS keeps replicas of each block on different nodes, when the client finishes writing a block to a slave, that slave starts replicating the block to the other slaves. In this way, multiple replicas of a block are created on different datanodes. By default, three copies of each block are created on different slaves, and once the required replicas exist, an acknowledgment is sent back to the client. Thus, while a data block is written, a pipeline is created and data is replicated to the desired replication factor in the cluster.
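For illustration, a small sketch of two standard ways a client can control the replication factor that drives this pipeline; the file path is a placeholder:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.setInt("dfs.replication", 3);          // default replication for files this client creates

    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/user/dataflair/replicated.txt");   // placeholder path

    // create() also accepts an explicit replication factor for a single file.
    short replication = 3;
    try (FSDataOutputStream out = fs.create(file, replication)) {
      out.writeBytes("this block will be replicated to three datanodes");
    }
  }
}
```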

Let's understand the procedure in more detail. As the client writes data, DFSOutputStream splits it into packets, which are written to an internal queue called the data queue. The data queue is consumed by the DataStreamer, whose main responsibility is to ask the namenode to allocate new blocks on suitable datanodes to store the replicas. These datanodes form a pipeline; assuming the default replication factor of three, there are three nodes in the pipeline. The DataStreamer streams the packets to the first datanode in the pipeline, which stores each packet and forwards it to the second datanode in the pipeline.

In the same way, the second datanode stores the packet and forwards it to the third (and last) datanode in the pipeline.

DFSOutputStream also maintains an internal queue of packets that are waiting to be acknowledged by datanodes, known as the "ack queue". A packet is removed from the ack queue only when it has been acknowledged by all the datanodes in the pipeline. The client calls the close() method on the stream when it has finished writing data.
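To make the queue mechanics concrete, here is a deliberately simplified, single-process sketch of the data-queue/ack-queue idea. The class and its behavior are invented for illustration only and are not the real DFSOutputStream internals; acknowledgments are simulated as arriving immediately:

```java
import java.nio.charset.StandardCharsets;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class WritePipelineSketch {
  private static final byte[] END_OF_STREAM = new byte[0];   // marker, plays the role of close()

  public static void main(String[] args) throws Exception {
    BlockingQueue<byte[]> dataQueue = new LinkedBlockingQueue<>(); // packets waiting to be streamed
    BlockingQueue<byte[]> ackQueue  = new LinkedBlockingQueue<>(); // packets awaiting acknowledgment

    // "DataStreamer" thread: move packets from the data queue to the ack queue as they are
    // "sent", and drop them from the ack queue once the (simulated) acknowledgment arrives.
    Thread streamer = new Thread(() -> {
      try {
        byte[] packet;
        while ((packet = dataQueue.take()) != END_OF_STREAM) {
          ackQueue.put(packet);            // sent down the pipeline, now awaiting acks
          System.out.println("streamed packet of " + packet.length + " bytes");
          ackQueue.take();                 // simulated ack from the last datanode -> remove packet
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });
    streamer.start();

    // Client write path: split the data into packets and enqueue them.
    for (int i = 0; i < 3; i++) {
      dataQueue.put(("packet-" + i).getBytes(StandardCharsets.UTF_8));
    }
    dataQueue.put(END_OF_STREAM);          // analogous to calling close()
    streamer.join();
  }
}
```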

Calling close() flushes all the remaining packets to the datanode pipeline and waits for their acknowledgments before contacting the namenode to signal that the file is complete. The namenode already knows which blocks make up the file, so it only has to wait for the blocks to be minimally replicated before returning successfully.
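Once close() returns, the file is visible in the namespace. A sketch of how a client could inspect the result using standard FileSystem calls; the path is a placeholder:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class InspectWrittenFile {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FileStatus status = fs.getFileStatus(new Path("/user/dataflair/sample.txt")); // placeholder path

    System.out.println("length: " + status.getLen()
        + " bytes, replication: " + status.getReplication());

    // Each block reports the datanodes that hold one of its replicas.
    for (BlockLocation block : fs.getFileBlockLocations(status, 0, status.getLen())) {
      System.out.println("block hosts: " + String.join(", ", block.getHosts()));
    }
  }
}
```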

To play with HDFS, follow the Frequently Used HDFS Commands tutorial.
