hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Lucene-hadoop Wiki] Update of "HowManyMapsAndReduces" by OwenOMalley
Date Wed, 05 Jul 2006 22:05:34 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Lucene-hadoop Wiki" for change notification.

The following page has been changed by OwenOMalley:
http://wiki.apache.org/lucene-hadoop/HowManyMapsAndReduces

New page:
= Partitioning your job into maps and reduces =

Picking the appropriate size for your job's tasks can radically change Hadoop's performance.
Increasing the number of tasks increases the framework overhead, but improves load balancing
and lowers the cost of failures. At one extreme is the 1 map / 1 reduce case, where nothing is
distributed. At the other extreme, 1,000,000 maps / 1,000,000 reduces leaves the framework
without enough resources for the overhead.

== Number of Maps ==

The number of maps is usually driven by the number of DFS blocks in the input files, which
leads people to adjust their DFS block size to adjust the number of maps. The right level of
parallelism for maps seems to be around 10-100 maps/node, although we have taken it up to
300 or so for very CPU-light map tasks.
Task setup takes a while, so it is best if each map takes at least a minute to execute.

Actually controlling the number of maps is subtle. The mapred.map.tasks parameter is just
a hint to the !InputFormat for the number of maps. The default !InputFormat behavior is to
split the total number of bytes into the right number of fragments. However, the DFS block
size of the input files is treated as an upper bound for input splits, and a lower bound on
the split size can be set via mapred.min.split.size. Thus, if you expect 10 TB of input data
and have 128 MB DFS blocks, you'll end up with 82k maps, unless your mapred.map.tasks is even
larger.
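
A minimal sketch of setting those knobs from Java, assuming the !JobConf API; the 200-map
hint and the 64 MB minimum split size are made-up example values, not recommendations:

{{{
import org.apache.hadoop.mapred.JobConf;

// Sketch only: the property names come from the text above; the values are examples.
public class MapCountSketch {
  public static void main(String[] args) {
    JobConf conf = new JobConf(MapCountSketch.class);

    // A hint, not a command: the InputFormat decides the real split count.
    conf.setNumMapTasks(200);                                 // mapred.map.tasks

    // Lower bound on the split size; the DFS block size is the upper bound.
    conf.setLong("mapred.min.split.size", 64L * 1024 * 1024); // 64 MB

    // The 82k figure from the text: 10 TB of input over 128 MB blocks.
    long inputBytes = 10L * 1024 * 1024 * 1024 * 1024;
    long blockBytes = 128L * 1024 * 1024;
    System.out.println("maps = " + inputBytes / blockBytes);  // 81920
  }
}
}}}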

== Number of Reduces ==

The right number of reduces seems to be between 1.0 and 1.75 * (nodes * mapred.tasktracker.tasks.maximum).
With a factor of 1.0, all of the reduces can launch immediately and start transferring map
outputs as the maps finish. With a factor of 1.75, the faster nodes finish their first round
of reduces and launch a second wave of reduces, doing a much better job of load balancing.
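
As a rough illustration of the heuristic, here is a sketch with a hypothetical 20-node
cluster and mapred.tasktracker.tasks.maximum set to 2:

{{{
import org.apache.hadoop.mapred.JobConf;

// Sketch: picking the reduce count from the 1.75 rule of thumb above.
// The 20 nodes and the per-node task maximum of 2 are assumed values.
public class ReduceCountSketch {
  public static void main(String[] args) {
    int nodes = 20;
    int tasksMaximum = 2;          // mapred.tasktracker.tasks.maximum

    // 1.75 gives the faster nodes a second wave of reduces to balance the load.
    int reduces = (int) (1.75 * nodes * tasksMaximum);   // 70

    JobConf conf = new JobConf(ReduceCountSketch.class);
    conf.setNumReduceTasks(reduces);
    System.out.println("reduces = " + reduces);
  }
}
}}}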

Currently the number of reduces is limited to roughly 1000 by the buffer size for the output
files (io.buffer.size * 2 * numReduces << heapSize). This will be fixed at some point, but
until then it provides a pretty firm upper bound.
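
A back-of-the-envelope check of that constraint; the 64 KB io.buffer.size and the 200 MB
task heap below are assumptions for illustration, not defaults:

{{{
// Sketch with assumed values: a 64 KB io.buffer.size and a 200 MB task heap.
public class ReduceBufferCheck {
  public static void main(String[] args) {
    long ioBufferSize = 64L * 1024;          // io.buffer.size (assumed)
    long heapSize = 200L * 1024 * 1024;      // task heap (assumed)
    int numReduces = 1000;

    // "<<" means the buffers should be only a small fraction of the heap.
    long bufferBytes = ioBufferSize * 2 * numReduces;   // 131,072,000 (~125 MB)

    // Here the buffers would eat well over half the heap, which is why roughly
    // 1000 reduces is the practical ceiling under these assumptions.
    System.out.println("buffers " + bufferBytes + " bytes, heap " + heapSize + " bytes");
  }
}
}}}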

The number of reduces also controls the number of output files in the output directory, but
usually that is not important, because the next map/reduce step will split them into even
smaller pieces for its maps.
