Forums › Hadoop › How many reducers run when we submit a MapReduce job?

This topic contains 1 reply, has 1 voice, and was last updated by  dfbdteam3 1 year, 6 months ago.

Viewing 2 posts - 1 through 2 (of 2 total)


    How many reducers run when we submit a MapReduce job?
    Can we control the number of reducers?
    What is the best practice for choosing the number of reducers to get the best performance?



    In Hadoop MapReduce, the Mapper processes each input record (supplied by the RecordReader) and generates key-value pairs. The Reducer takes the intermediate key-value pairs generated by the Mappers as input and runs a reduce function on each key group to generate output. The Reducer's output is the final output, which is stored in HDFS. A Reducer typically performs aggregation or summation-style computation.
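    To illustrate the reduce step described above, here is a minimal plain-Java sketch (it does not use the Hadoop API; the class name and data are made up for illustration). It mirrors the contract of Hadoop's Reducer.reduce(key, values, context): for each intermediate key, aggregate all of its values into a single output value, as a word-count reducer would.

    ```java
    import java.util.*;

    // Plain-Java sketch of what a reduce step does: for each intermediate key,
    // sum all the values the shuffle grouped under that key.
    public class ReduceSketch {
        public static Map<String, Integer> reduce(Map<String, List<Integer>> grouped) {
            Map<String, Integer> output = new TreeMap<>();
            for (Map.Entry<String, List<Integer>> e : grouped.entrySet()) {
                int sum = 0;                          // aggregation, as in word count
                for (int v : e.getValue()) sum += v;
                output.put(e.getKey(), sum);
            }
            return output;
        }

        public static void main(String[] args) {
            // Intermediate key-value pairs as the shuffle would group them.
            Map<String, List<Integer>> grouped = new HashMap<>();
            grouped.put("hadoop", Arrays.asList(1, 1, 1));
            grouped.put("spark", Arrays.asList(1, 1));
            System.out.println(reduce(grouped));      // {hadoop=3, spark=2}
        }
    }
    ```

    In real Hadoop code the same logic lives in a subclass of org.apache.hadoop.mapreduce.Reducer, and the framework, not your code, does the grouping.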

    With Job.setNumReduceTasks(int), the user sets the number of reducers for the job. A good number of reducers is estimated by:

    0.95 or 1.75 multiplied by (<no. of nodes> * <no. of maximum containers per node>)
    With 0.95, all the reduces can launch immediately and start transferring map outputs as the maps finish. With 1.75, the faster nodes finish their first round of reduces and launch a second wave, which does a much better job of load balancing.
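    The heuristic above can be sketched in a few lines of Java (the class name, method name, and cluster numbers are hypothetical, chosen only to show the arithmetic; the result is what you would pass to Job.setNumReduceTasks(int)):

    ```java
    // Sketch of the reducer-count heuristic:
    //   reducers ≈ factor * (nodes * max containers per node)
    // factor 0.95 -> one wave of reduces; factor 1.75 -> two waves for better load balancing.
    public class ReducerCount {
        public static int suggested(int nodes, int containersPerNode, double factor) {
            return (int) Math.round(factor * nodes * containersPerNode);
        }

        public static void main(String[] args) {
            int nodes = 10, containersPerNode = 4;  // hypothetical example cluster
            System.out.println(suggested(nodes, containersPerNode, 0.95)); // 38 - single wave
            System.out.println(suggested(nodes, containersPerNode, 1.75)); // 70 - two waves
        }
    }
    ```

    In practice you would also weigh the size of the map output and the per-reducer overhead, not just the container count.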

    When the number of reducers is increased:

    • Framework overhead increases.
    • Load balancing improves.
    • The cost of failures decreases.

    To study in detail please follow: Reducer in Hadoop


You must be logged in to reply to this topic.