The Reducer in Hadoop is, like the Mapper, a user-defined program; it reduces a set of intermediate output values to a smaller set of values. It accepts this intermediate output from the Mappers as key-value [K, V] pairs. It is the second phase in which custom business logic is applied to the data. In the reducer, the data is generally aggregated, filtered, or combined to produce the final output, i.e. the lightweight processing.
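To illustrate the aggregation step, here is a minimal sketch in plain Java, without the Hadoop libraries. The `reduce` method below is a hypothetical stand-in modeled on the shape of Hadoop's `Reducer.reduce(key, values, context)`: it receives all intermediate values for one key and collapses them into a single output value, word-count style.

```java
import java.util.Arrays;
import java.util.List;

public class ReduceSketch {
    // Mimics the role of Reducer.reduce(key, values, context):
    // aggregate every intermediate value for one key into a single
    // output value -- here, a simple sum (as in word count).
    static int reduce(String key, Iterable<Integer> values) {
        int sum = 0;
        for (int v : values) {
            sum += v; // the "lightweight" business logic of the reduce phase
        }
        return sum;
    }

    public static void main(String[] args) {
        // Intermediate [K, V] values for one key, as a reducer receives them
        List<Integer> counts = Arrays.asList(1, 1, 1);
        System.out.println("hadoop -> " + reduce("hadoop", counts));
    }
}
```

In a real job this logic would live in a class extending `org.apache.hadoop.mapreduce.Reducer` and write its result through the `Context` object instead of returning it.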
As soon as the first Mapper finishes its task, its output is sent to the reducer, and this continues until the last Mapper finishes. The data received from the different Mappers is then sorted by key, so that values with the same key from all the Mappers are grouped together.
The shuffle and sort processes occur in parallel. The data from the different Mappers is then merged and sent to the reducer, and the final output is saved to HDFS. When there are multiple reducers, the output from the Mappers is partitioned between the reducers.
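The shuffle, sort, and partition steps above can be sketched in plain Java. This is an illustrative model, not Hadoop code: `partition` mirrors the formula used by Hadoop's default `HashPartitioner` to decide which reducer gets a key, and `shuffle` groups all mapper output by key per reducer, using a `TreeMap` so each reducer sees its keys in sorted order.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class ShuffleSketch {
    // Same formula as Hadoop's default HashPartitioner.getPartition:
    // it decides which of the numReducers receives a given key.
    static int partition(String key, int numReducers) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReducers;
    }

    // Shuffle + sort: route each [K, V] pair from the mappers to its
    // reducer's bucket, grouping values by key; TreeMap keeps the keys
    // of each bucket sorted, as the sort phase does.
    static List<Map<String, List<Integer>>> shuffle(
            List<Map.Entry<String, Integer>> mapperOutput, int numReducers) {
        List<Map<String, List<Integer>>> reducers = new ArrayList<>();
        for (int i = 0; i < numReducers; i++) {
            reducers.add(new TreeMap<>());
        }
        for (Map.Entry<String, Integer> kv : mapperOutput) {
            int r = partition(kv.getKey(), numReducers);
            reducers.get(r)
                    .computeIfAbsent(kv.getKey(), k -> new ArrayList<>())
                    .add(kv.getValue());
        }
        return reducers;
    }

    public static void main(String[] args) {
        // Unordered intermediate output, as produced by several mappers
        List<Map.Entry<String, Integer>> out = Arrays.asList(
                Map.entry("world", 1), Map.entry("hello", 1), Map.entry("hello", 1));
        System.out.println(shuffle(out, 2));
    }
}
```

With two reducers, all pairs sharing a key land in the same bucket, which is why one reducer can aggregate a key completely without seeing the other reducers' data.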
Follow the link for more details: Reducer in Hadoop