The InputFormat defines the input splits, i.e. the logical division of the data, but the actual reading of records is done by the RecordReader.
The RecordReader generates the key-value pairs from a split, which are then passed as input to the map task.
public abstract RecordReader<K, V>
createRecordReader(InputSplit split, TaskAttemptContext context)
throws IOException, InterruptedException;
The splits are computed by getSplits(); for each split, the map task calls the createRecordReader() method on the InputFormat to obtain a RecordReader, which produces the key-value pairs that are then processed by the map function.
If there are N splits, N RecordReader instances and N map tasks are used to process them.
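The flow above can be illustrated with a minimal, self-contained sketch. This is not the Hadoop API: the Split record and LineReader class below are simplified stand-ins for InputSplit and RecordReader, assuming TextInputFormat's convention that the key is the byte offset of a line and the value is the line itself. One reader is created per split, mirroring the one-RecordReader-per-map-task relationship.

```java
import java.util.*;

// Toy sketch (not the real Hadoop classes): simulates how each split gets its
// own record reader, which turns the raw contents of the split into
// (key, value) pairs for one map task.
public class RecordReaderSketch {

    // Stand-in for an InputSplit: a contiguous chunk of the "file" plus its
    // starting byte offset in that file.
    record Split(String data, long startOffset) {}

    // Stand-in for a RecordReader: iterates the split's lines as
    // (byte offset, line) pairs.
    static class LineReader implements Iterator<Map.Entry<Long, String>> {
        private final Split split;
        private long offset;
        private int pos = 0;

        LineReader(Split split) {
            this.split = split;
            this.offset = split.startOffset();
        }

        public boolean hasNext() { return pos < split.data().length(); }

        public Map.Entry<Long, String> next() {
            int end = split.data().indexOf('\n', pos);
            if (end < 0) end = split.data().length();
            String line = split.data().substring(pos, end);
            Map.Entry<Long, String> kv = Map.entry(offset, line);
            offset += (end - pos) + 1;  // advance past the line and its '\n'
            pos = end + 1;
            return kv;
        }
    }

    public static void main(String[] args) {
        // Two splits of one logical file; in Hadoop, getSplits() on the
        // InputFormat would compute these boundaries.
        List<Split> splits = List.of(
            new Split("alpha\nbeta\n", 0),
            new Split("gamma\n", 11));

        // One reader (and, conceptually, one map task) per split.
        for (Split s : splits) {
            LineReader reader = new LineReader(s);
            while (reader.hasNext()) {
                Map.Entry<Long, String> kv = reader.next();
                System.out.println(kv.getKey() + "\t" + kv.getValue());
            }
        }
    }
}
```

Running this prints each line keyed by its offset in the original file, which is exactly the shape of input a map function receives from TextInputFormat.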
Follow the link for more details: RecordReader in Hadoop