In which location does the NameNode store its metadata, and why?


    • #5243
      DataFlair Team

      Where does the NameNode keep its metadata? Does it store it in HDFS, on the local file system, or in memory, and why?

    • #5244
      DataFlair Team

      In Hadoop, the NameNode maintains two types of files:

      1) Edit log files
      2) FsImage files

      Both are kept on the NameNode's local disk (persistent storage).

      When the NameNode starts, the latest FsImage file is loaded into memory. If the FsImage does not contain up-to-date information, the edit log is loaded as well, and the transactions recorded in the edit log(s) are replayed to bring the in-memory FsImage data up to date. What kind of information the edit log and the FsImage contain is shown further below.
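
      First, to confirm that these files really live on the NameNode's local disk, you can list the metadata directory. The path below is only an assumption based on a typical dfs.namenode.name.dir setting; use whatever directory your hdfs-site.xml points to.

      # path is an assumption (dfs.namenode.name.dir); adjust to your configuration
      ls ~/hdata/dfs/name/current
      # typical contents:
      # edits_0000000000000000001-0000000000000000012
      # edits_inprogress_0000000000000000013
      # fsimage_0000000000000000000  fsimage_0000000000000000000.md5
      # seen_txid  VERSION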

      The NameNode keeps its metadata in memory so that it can serve multiple client requests as fast as possible. If the metadata were not held in memory, then for every operation the NameNode would have to load it from disk and only then run the various checks against it; that would add disk-seek time to every operation (reading from and writing to disk is slow compared to memory access). That is why the metadata is stored in memory. The in-memory state holds both file metadata and BitMap metadata.

      The in-memory state contains two types of metadata:
      1) File metadata
      2) BitMap metadata

      File metadata: This contains the file name, permissions (owner, group, others), replication factor, block IDs [b1, b2, b3, ...; these block IDs are unique across the cluster], file size, and so on. File metadata holds only the file-to-block mapping; the block-to-datanode mapping is not part of it. Each file object consumes roughly 150 bytes in memory, and each block object consumes roughly another 150 bytes.
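
      As a rough, back-of-the-envelope illustration of that ~150-byte figure (the actual numbers vary with Hadoop version and object type):

      1,000,000 files x 1 block each
      = 1,000,000 file objects + 1,000,000 block objects
      = 2,000,000 objects x ~150 bytes
      = roughly 300 MB of NameNode heap for metadata alone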

      The block-to-datanode mapping lives in the BitMap metadata. It contains the block ID(s), the addresses of the datanode(s) holding each block, the generation timestamp, the block length, and so on. The BitMap exists only in memory; it is never written to disk. So how is it rebuilt when the NameNode restarts? After a restart, the NameNode stays in safe mode and waits for signals from the datanodes. Each datanode sends its block report to the NameNode, and from these reports the NameNode rebuilds the BitMap metadata.
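
      You can watch this phase from the command line; the dfsadmin utility reports whether the NameNode is still in safe mode (the sample output lines below are indicative only):

      hdfs dfsadmin -safemode get
      # Safe mode is ON    -> block reports are still being collected
      # Safe mode is OFF   -> enough blocks have been reported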

      Based on these two kinds of metadata, the NameNode can serve client requests.

      Let's say I want to write a file, employee.txt (300 MB, replication factor 3), onto the HDFS slave nodes. First of all, the client needs to interact with the NameNode. The NameNode checks whether the file already exists. Where does it check? In the in-memory file metadata. If the file is not there, the NameNode records the operation in the edit log. Once the record has been written to the edit log, the NameNode writes the file's metadata into memory. Note that the NameNode does not write file metadata into the FsImage file while the cluster is up and running. For each operation/record, an incremental counter value (a transaction ID) is assigned. Inside Hadoop, writing a file is not a single operation; it involves many transactions (such as replicating each block once it has been written to the first datanode, and so on). A small, hedged example of such a write is shown below.
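
      For example (employee.txt and /user/hdadmin are placeholder names, not taken from the cluster above):

      # ask for 3 replicas at write time
      hdfs dfs -D dfs.replication=3 -put employee.txt /user/hdadmin/
      # or adjust the replication factor after the copy
      hdfs dfs -setrep 3 /user/hdadmin/employee.txt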

      If you convert the binary edit log file(s) to XML format, you can see how many transactions were performed for a single file. A new record is maintained for each transaction, and each record is assigned a transaction ID.

      hdfs oev -i edits_xxxxxxxxx -o edit_myformat.xml -p XML

      Open edit_myformat.xml and look at the details of every transaction that was performed.
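
      A quick, hedged way to get an overview of those transactions (this assumes the oev XML output wraps each transaction in a RECORD element with an OPCODE child, as current versions do):

      grep -c "<RECORD>" edit_myformat.xml                 # how many transactions were logged
      grep "<OPCODE>" edit_myformat.xml | sort | uniq -c   # which operation types appear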

      What are FsImage and the edit log?

      As said earlier, the information is first recorded in the edit logs, and only then is the in-memory copy of the FsImage data updated (only in memory, not on disk). In general, the FsImage information is equal to the in-memory file metadata. How, then, does the FsImage on disk get the latest information? This is achieved by the checkpoint node / secondary NameNode.
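
      The on-disk FsImage can be inspected in the same way with the offline image viewer; the file name below is just a placeholder for whichever fsimage_* file sits in your NameNode directory:

      hdfs oiv -i fsimage_0000000000000000000 -o fsimage_myformat.xml -p XML

      An image dumped this way contains an INodeSection similar to the one shown later in this answer.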

      Suppose I want to write/store empdept.txt on HDFS.

      Here we want to store a file on HDFS. Whether an entry is a file or a directory is itself metadata. To differentiate the two, the value FILE is used for a file and DIRECTORY for a directory. Look at the opening and closing type tags below.

      <type>FILE</type>

      Next, the file name is also metadata; to record empdept.txt as the file name, an opening and closing name tag is used.
      Next comes the replication tag, which shows that we used a replication factor of one (1) for this particular file. I am running in pseudo-distributed mode, so the replication factor is configured as one. This, too, is metadata of that particular file, and so on.

      <name>empdept.txt</name>
      <replication>1</replication>
      <mtime>1498301552634</mtime>
      <atime>1498301551522</atime>
      <preferredBlockSize>134217728</preferredBlockSize>
      <permission>hdadmin:supergroup:rw-r--r--</permission>

      In memory, you can also see the information below (my input file size = 51 bytes):
      id: the block ID, which is unique across the cluster
      numBytes: how many bytes this particular block holds

      <blocks>
      <block><id>1073741825</id>
      <genstamp>1001</genstamp>
      <numBytes>51</numBytes>
      </block>
      </blocks>

      The number 1073741825 above matches the block file name (blk_1073741825) on the datanode. Look at the listing below (from my datanode's storage directory).

      hdadmin@ubuntu:~/hdata/dfs/data/current/BP-2145751794-127.0.1.1-1498301240754/current/finalized/subdir0/subdir0$ l
      blk_1073741825 blk_1073741825_1001.meta
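
      You can also ask HDFS directly which blocks a file has and on which datanodes they live; fsck prints exactly the block-to-datanode mapping that the BitMap metadata holds (this assumes empdept.txt was stored in the HDFS root; adjust the path to wherever you put it):

      hdfs fsck /empdept.txt -files -blocks -locations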

      In summary, the file metadata (in memory) looks like this:

      <INodeSection><lastInodeId>16386</lastInodeId><inode><id>16385</id>
      <type>DIRECTORY</type>
      <name></name>
      <mtime>1498301552643</mtime>
      <permission>hdadmin:supergroup:rwxr-xr-x</permission>
      <nsquota>9223372036854775807</nsquota>
      <dsquota>-1</dsquota>
      </inode>
      <inode>
      <id>16386</id>
      <type>FILE</type>
      <name>empdept.txt</name><replication>1</replication>
      <mtime>1498301552634</mtime><atime>1498301551522</atime>
      <preferredBlockSize>134217728</preferredBlockSize>
      <permission>hdadmin:supergroup:rw-r--r--</permission>
      <blocks>
      <block><id>1073741825</id>
      <genstamp>1001</genstamp>
      <numBytes>51</numBytes>
      </block>
      </blocks>
      </inode>
      </INodeSection>

      For each file,

      type tag: indicates whether it is a file or a directory
      name tag: the file name
      replication tag: the replication factor of that particular file
      preferredBlockSize tag: the block size used for this file (here the default, 134217728 bytes = 128 MB)
      permission tag: the owner, group, and permission bits of the file
      blocks tag: a file may have one or more blocks; all of them appear inside the blocks tag
      block tag: contains the block ID (id tag), the generation timestamp (genstamp tag), and the number of bytes (numBytes tag)

      All of the information from the type tag to the blocks tag is part of the inode tag.

      So the in-memory state contains the file metadata, and this file metadata is persisted to disk with the help of the checkpoint node / secondary NameNode. The block-to-datanode mapping is also held in memory, but it is never stored persistently; as explained above, it is rebuilt from the datanodes' block reports. I will cover the secondary node, the checkpoint node, and the differences between the Hadoop 1.x and Hadoop 2.x master nodes here: http://data-flair.training/forums/topic/what-is-single-point-of-failure-in-hadoop-1-and-how-it-is-resolved-in-hadoop-2. I will probably add that information to the linked topic within a day or two.

    • #5247
      DataFlair Team

      The NameNode stores the metadata of the distributed file system: the file-to-block mapping, the location of blocks on datanodes, the list of active datanodes, file permissions, file owners, and so on.

      It is the most critical piece of software in the entire HDFS file system. The NameNode is the first point of contact for any client performing read or write operations: the client fetches the metadata from the NameNode and then performs file I/O directly with the actual datanodes.

      Read and write operations are effective and optimized only when the metadata can be accessed quickly; to achieve this speed and simplicity, the NameNode keeps all of this information in main memory.

      The NameNode also stores a snapshot of the entire metadata state on its local disk as the FsImage file. Whenever the NameNode starts, this copy is brought back into main memory, which prevents the metadata from being lost.
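
      If you want to force the NameNode to write a fresh snapshot to disk without waiting for a checkpoint, one commonly used sequence (sketched here as an illustration) is:

      hdfs dfsadmin -safemode enter      # saveNamespace requires safe mode
      hdfs dfsadmin -saveNamespace       # write the current in-memory metadata to a new fsimage
      hdfs dfsadmin -safemode leave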
