In Hadoop, HDFS splits huge files into smaller chunks called blocks. A block is the smallest unit of data in the file system. The NameNode (master) decides where each block is stored on the DataNodes (slaves). All blocks of a file are the same size except the last one, which may be smaller.
In Apache Hadoop, the default block size is 128 MB. It can be changed by editing hdfs-site.xml and setting the "dfs.blocksize" property (the older name "dfs.block.size" is deprecated). For example, to use 256 MB blocks:
<property>
  <name>dfs.blocksize</name>
  <value>268435456</value>
  <description>Block size (256 MB)</description>
</property>
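The splitting behaviour described above, where every block except the last is full-sized, can be sketched in a few lines of Python. This is a minimal illustration, not HDFS code; the 300 MB file size is a hypothetical example, and the block size mirrors the HDFS default:

```python
BLOCK_SIZE = 128 * 1024 * 1024  # 128 MB, the HDFS default (dfs.blocksize)

def split_into_blocks(file_size: int, block_size: int = BLOCK_SIZE) -> list[int]:
    """Return the sizes of the blocks a file of file_size bytes occupies."""
    full, remainder = divmod(file_size, block_size)
    blocks = [block_size] * full
    if remainder:
        blocks.append(remainder)  # the last block may be smaller than block_size
    return blocks

# A hypothetical 300 MB file needs three blocks: 128 MB + 128 MB + 44 MB.
sizes = split_into_blocks(300 * 1024 * 1024)
```

Note that the last block occupies only as much space as it actually holds, so a 300 MB file does not waste a full 128 MB for its final 44 MB of data.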