Blocks are of a fixed size (128 MB by default in Hadoop 2), so it is easy to calculate the number of blocks that can be stored on a disk. The main reason for making HDFS blocks large, i.e. 128 MB, is to reduce the cost of seek time: with large blocks, the time spent seeking to the start of a block is a small fraction of the time spent transferring its data.
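The block-count calculation above can be sketched in a few lines of Python. This is only an illustration of the arithmetic, not Hadoop code; the 128 MB constant is the Hadoop 2 default mentioned above.

```python
import math

# Default HDFS block size in Hadoop 2: 128 MB (an assumption of this sketch;
# real clusters can override it via dfs.blocksize).
BLOCK_SIZE = 128 * 1024 * 1024

def num_blocks(file_size_bytes: int) -> int:
    """Number of HDFS blocks needed to store a file of the given size."""
    return math.ceil(file_size_bytes / BLOCK_SIZE)

print(num_blocks(1024 * 1024 * 1024))  # 1 GB file -> 8 blocks
print(num_blocks(200 * 1024 * 1024))   # 200 MB file -> 2 blocks (128 MB + 72 MB)
```

The fixed size also means a disk of capacity C can hold roughly C / 128 MB blocks, which is why capacity planning with HDFS blocks is straightforward.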
The HDFS block concept simplifies storage management on the DataNodes. DataNodes do not maintain block metadata such as file permissions and other details; the NameNode takes care of all this, i.e. it maintains the metadata of all the blocks.
If the size of a file is less than the HDFS block size, the file does not occupy the complete block's storage. Because a file in HDFS is chunked into blocks, storing a file that is larger than any single disk is also easy: the data blocks are distributed and stored across multiple nodes in the Hadoop cluster.
Follow the link to learn more about HDFS Data Blocks.