This topic contains 1 reply, has 1 voice, and was last updated by  dfbdteam3 1 year, 2 months ago.



    Why 128MB chosen as default block size in Hadoop?
    What is data block size in HDFS?
    Why is HDFS Block size 128 MB in Hadoop?
    What is Block in HDFS?



    Blocks are of a fixed size (128 MB in Hadoop 2), so it is very easy to calculate the number of blocks needed to store a file on disk. The main reason for making HDFS blocks large, i.e. 128 MB, is to reduce the cost of seek time relative to the time spent transferring data.
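    Since blocks are fixed-size, the block count for a file is just a ceiling division. A minimal sketch of that arithmetic (illustrative only, not HDFS's actual implementation; the function name `num_blocks` is made up here):

    ```python
    import math

    BLOCK_SIZE = 128 * 1024 * 1024  # default HDFS block size in Hadoop 2: 128 MB

    def num_blocks(file_size_bytes):
        """Number of HDFS blocks needed to store a file of the given size."""
        return math.ceil(file_size_bytes / BLOCK_SIZE)

    # A 1 GB file needs exactly 8 blocks (1024 / 128 = 8).
    print(num_blocks(1024 * 1024 * 1024))  # -> 8
    # A 300 MB file needs 3 blocks (128 + 128 + 44 MB).
    print(num_blocks(300 * 1024 * 1024))   # -> 3
    ```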

    The HDFS block concept simplifies storage for the DataNodes. DataNodes do not maintain block metadata such as file permissions and other details; the NameNode takes care of all this, i.e. it maintains the metadata of all the blocks.
    If a file is smaller than the HDFS block size, it does not occupy a full block's worth of storage. Because a file in HDFS is chunked into blocks, storing a file larger than any single disk is straightforward: its blocks are distributed and stored across multiple nodes in the Hadoop cluster.
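    The point about small files can be illustrated by listing the block sizes a file would actually occupy: every block is 128 MB except possibly the last, which holds only the remaining bytes. A sketch of that splitting (illustrative only; `block_sizes` is a hypothetical helper, not an HDFS API):

    ```python
    def block_sizes(file_size_bytes, block_size=128 * 1024 * 1024):
        """Sizes of the blocks a file is split into; the last block may be partial."""
        sizes = []
        remaining = file_size_bytes
        while remaining > 0:
            sizes.append(min(block_size, remaining))
            remaining -= block_size
        return sizes

    # A 1 MB file occupies a single 1 MB block, not a full 128 MB of storage.
    print(block_sizes(1024 * 1024))        # -> [1048576]
    # A 300 MB file: two full blocks plus a 44 MB partial block.
    print(block_sizes(300 * 1024 * 1024))  # -> [134217728, 134217728, 46137344]
    ```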

    Follow the link to learn more about HDFS Data Blocks

