A computation requested by an application is most efficient when it executes close to the data it operates on.
But Hadoop works with huge volumes of data spread across the cluster. If that data had to be copied to the node where a submitted MapReduce job executes, the copy would create a network bottleneck and make the job time-consuming.
Hadoop's solution to this network bottleneck, and to executing computations in parallel, is to move the algorithm closer to the data rather than the data to the algorithm. This is called data locality.
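The scheduling idea behind data locality can be illustrated with a minimal sketch: given the nodes that hold replicas of a data block and the nodes with free task slots, prefer a node that already has the data. The node names and the `schedule_task` helper below are hypothetical illustrations, not Hadoop's actual API.

```python
def schedule_task(block_replicas, free_nodes):
    """Prefer a node that already holds a replica of the data block
    (node-local); otherwise fall back to any free node (remote read)."""
    for node in free_nodes:
        if node in block_replicas:
            return node, "node-local"   # computation moves to the data
    return free_nodes[0], "remote"      # data must cross the network


# Hypothetical cluster: block B1 is replicated on node2 and node3;
# node1 and node2 currently have free task slots.
node, locality = schedule_task(block_replicas={"node2", "node3"},
                               free_nodes=["node1", "node2"])
print(node, locality)  # → node2 node-local
```

In the node-local case no block data travels over the network at all; real Hadoop schedulers additionally distinguish rack-local placement as a middle ground between node-local and remote.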
Follow the link to learn more: Data Locality in Hadoop