
  • #4868

    dfbdteam3
    Moderator

    How to calculate the number of mappers in Hadoop?
    How to set the number of mappers for a MapReduce job?
    How to change the number of mappers in the cluster?

    #4869

    dfbdteam3
    Moderator

    The number of mappers that Hadoop creates is determined by the number of input splits in your data.
    The relation is simple:

    No. of Mappers = No. of Input Splits.

    So, in order to control the number of mappers, you have to control the number of input splits Hadoop creates before running your MapReduce program. FileInputFormat computes the split size as max(minSplitSize, min(maxSplitSize, blockSize)), which gives you two properties to work with, both specified in bytes: ‘mapred.max.split.size’ (newer name: ‘mapreduce.input.fileinputformat.split.maxsize’) caps splits below the block size when you want more mappers, while ‘mapred.min.split.size’ (newer name: ‘mapreduce.input.fileinputformat.split.minsize’) pushes splits above the block size when you want fewer mappers. Either one can be set while running your MR program.
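
    If you prefer to fix this in the driver rather than on the command line, the new-API FileInputFormat exposes the same two knobs programmatically. A minimal sketch, with SplitSizeDemo as a placeholder class name (mapper and reducer setup omitted, since it does not affect the split calculation):

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.mapreduce.Job;
        import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
        import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

        public class SplitSizeDemo {
            public static void main(String[] args) throws Exception {
                Job job = Job.getInstance(new Configuration(), "split-size-demo");
                job.setJarByClass(SplitSizeDemo.class);

                // Values are in bytes.
                // Raise the minimum split size above the block size to get FEWER mappers:
                FileInputFormat.setMinInputSplitSize(job, 10995116278L);   // ~10.24 GB per split
                // Or cap the maximum split size below the block size to get MORE mappers:
                // FileInputFormat.setMaxInputSplitSize(job, 64L * 1024 * 1024); // 64 MB per split

                FileInputFormat.addInputPath(job, new Path(args[0]));
                FileOutputFormat.setOutputPath(job, new Path(args[1]));
                System.exit(job.waitForCompletion(true) ? 0 : 1);
            }
        }

    Under the hood these setters simply write the split.minsize / split.maxsize properties into the job configuration, so the effect is the same as passing them with -D.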

    Example:
    Let’s assume your input data is 1 TB and the HDFS block size is the default 128 MB. So, number of physical data blocks = (1 * 1024 * 1024 MB) / 128 MB = 8192 blocks.
    By default, if you don’t change the split size, each split corresponds to one block, i.e. 8192 splits. Thus, your program will create and execute 8192 mappers!

    Let’s say you want to create only 100 mappers to handle your job.
    As mentioned above, 100 mappers means 100 input splits. So each split should be about (1 * 1024 * 1024 MB) / 100 = 10486 MB, i.e. roughly 10995116278 bytes. Since that is far larger than the block size, the property to raise is the minimum split size.

    Execute it as follows (note that the value is given in bytes):
    hadoop jar <your-script.jar> <main class> -Dmapred.min.split.size=10995116278 <input file> <output directory>
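
    One thing to watch: the -D option only takes effect if your driver goes through Hadoop's GenericOptionsParser, which you get automatically when the main class implements Tool and is launched via ToolRunner. A minimal sketch, with MyJobDriver as a placeholder class name (mapper and reducer setup again omitted):

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.conf.Configured;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.mapreduce.Job;
        import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
        import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
        import org.apache.hadoop.util.Tool;
        import org.apache.hadoop.util.ToolRunner;

        public class MyJobDriver extends Configured implements Tool {
            @Override
            public int run(String[] args) throws Exception {
                // getConf() already holds any -D properties from the command line,
                // e.g. mapred.min.split.size, so no extra parsing is needed here.
                Job job = Job.getInstance(getConf(), "my-job");
                job.setJarByClass(MyJobDriver.class);
                FileInputFormat.addInputPath(job, new Path(args[0]));
                FileOutputFormat.setOutputPath(job, new Path(args[1]));
                return job.waitForCompletion(true) ? 0 : 1;
            }

            public static void main(String[] args) throws Exception {
                System.exit(ToolRunner.run(new Configuration(), new MyJobDriver(), args));
            }
        }

    With this in place the command above works as shown: the generic options are stripped out before run() is called, so args[0] and args[1] are just the input and output paths.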

    Follow the link to learn more about Mappers in Hadoop.

