What is default HDFS Block size?

    • #6161
      DataFlair Team
      Spectator

      What is a block in HDFS? What is the default block size?
      What is the default block size in Hadoop?

    • #6162
      DataFlair Team
      Spectator

      In HDFS, data is stored in terms of blocks.
      A block is the unit into which a file is divided when it is stored in the cluster.
      In Hadoop 2, the default block size is 128 MB.
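      If you want to check the value configured on your own cluster, one quick way (a minimal sketch, assuming a standard Hadoop installation with the hdfs command on your PATH) is the getconf utility:

        # Print the configured default block size, in bytes (134217728 = 128 MB)
        hdfs getconf -confKey dfs.blocksize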

      Follow the link for more detail: HDFS Block in Hadoop

    • #6164
      DataFlair Team
      Spectator

      The default block size is:
      Hadoop 1 – 64 MB
      Hadoop 2 – 128 MB

      Increasing the block size improves performance when processing huge datasets. The value can be changed, depending on the storage context, data file size, and frequency of access of the files, by modifying the value of dfs.blocksize in the hdfs-site.xml file.
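      As a rough sketch, the corresponding hdfs-site.xml entry could look like the following; the 256 MB figure is only an illustrative choice, not a recommendation:

        <property>
          <name>dfs.blocksize</name>
          <!-- 256 MB expressed in bytes; choose a value suited to your workload -->
          <value>268435456</value>
        </property>

      Note that this only affects files written after the change; existing files keep the block size they were created with.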

      Follow the link for more detail: HDFS Block in Hadoop

    • #6166
      DataFlair Team
      Spectator

      The default size of an HDFS block is:
      Hadoop 1.0 – 64 MB, and in Hadoop 2.0 – 128 MB.
      64 MB or 128 MB is just the unit in which the data will be stored.
      If a particular file is 50 MB, the HDFS block will not consume the full 64 MB default size;
      in that situation only 50 MB is consumed by the HDFS block and the remaining 14 MB stays free to store something else.
      It is the master node (NameNode) that handles block allocation in an efficient manner.
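      You can observe this per-file behaviour with hdfs fsck, which reports the actual length of each block. A sketch, assuming a hypothetical 50 MB file stored at /data/sample.txt:

        # List the blocks of the file; 'len' shows the bytes actually stored, not the full 64/128 MB
        hdfs fsck /data/sample.txt -files -blocks

      For a 50 MB file this would report a single block with len around 52428800 bytes, confirming that only the real data size is consumed on disk.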

      Follow the link for more detail: HDFS Block in Hadoop
