hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Lucene-hadoop Wiki] Update of "HadoopMapReduce" by TeppoKurki
Date Wed, 19 Apr 2006 04:38:09 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Lucene-hadoop Wiki" for change notification.

The following page has been changed by TeppoKurki:
http://wiki.apache.org/lucene-hadoop/HadoopMapReduce

New page:
= How Map and Reduce operations are actually carried out =
== Introduction ==

This document describes how Map and Reduce operations are
carried out in Hadoop.

== Map ==

Because the Map operation is parallelized, the input file set is first
split into several pieces called FileSplits. If an individual file is
so large that it would affect seek time, it is split into several
FileSplits. The splitting does not know anything about the input
file's internal logical structure; for example, line-oriented text
files are split on arbitrary byte boundaries. A new MapTask is then
created per FileSplit.

When an individual MapTask starts, it opens a new output writer per
configured Reduce task. It then proceeds to read its FileSplit using
the RecordReader it gets from the specified InputFormat. The
InputFormat parses the input and generates key-value pairs. The
InputFormat does not have to generate both meaningful keys and
meaningful values: for example, the default TextInputFormat's output
consists of input lines as values and their starting file offsets as
keys - most applications use only the lines and ignore the offsets.

As key-value pairs are read from the RecordReader they are passed to
the configured Mapper. The user-supplied Mapper does whatever it wants
with the input pair and calls
[http://lucene.apache.org/hadoop/docs/api/org/apache/hadoop/mapred/OutputCollector.html#collect(org.apache.hadoop.io.WritableComparable,%20org.apache.hadoop.io.Writable) OutputCollector.collect]
with key-value pairs of its own choosing. The output it generates must
use one key class and one value class, because the Map output is
eventually written into a SequenceFile, which has per-file type
information, so all the records must have the same type (use
subclassing if you want to output different data structures). The Map
input and output key-value pairs need not be related in type or in
cardinality.
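
As an illustration, a minimal word-count Mapper might look like the
following sketch. It assumes the later, generic form of the old
org.apache.hadoop.mapred API, so the exact signatures differ between
Hadoop versions:

{{{
// Sketch only: assumes the later, generic form of the old
// org.apache.hadoop.mapred API; signatures vary by Hadoop version.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class WordCountMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, IntWritable> {

  private static final IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();

  // key = line start offset from TextInputFormat (ignored here),
  // value = the line itself
  public void map(LongWritable key, Text value,
                  OutputCollector<Text, IntWritable> output,
                  Reporter reporter) throws IOException {
    StringTokenizer tokens = new StringTokenizer(value.toString());
    while (tokens.hasMoreTokens()) {
      word.set(tokens.nextToken());
      // every output pair uses the same key class (Text)
      // and the same value class (IntWritable)
      output.collect(word, ONE);
    }
  }
}
}}}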

When Mapper output is collected it is partitioned, which means it is
written to the output destined for the Reduce task chosen by the
Partitioner. The default HashPartitioner uses the key's hashCode
(which means that for an even workload across the Reduce tasks the key
class's hashCode must distribute well).
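
The following standalone sketch mirrors this hashCode-based rule; it
is not the actual HashPartitioner class, just the same
modulo-on-hashCode idea written out:

{{{
// Standalone sketch of the default partitioning rule: the target
// reduce task is derived from the key's hashCode, so a poorly
// distributed hashCode skews the load across reducers.
public class PartitionSketch {
  static int partitionFor(Object key, int numReduceTasks) {
    // mask the sign bit so the result is never negative
    return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
  }

  public static void main(String[] args) {
    // with 4 reduce tasks, every key lands in one of partitions 0..3
    System.out.println(partitionFor("hadoop", 4));
    System.out.println(partitionFor("lucene", 4));
  }
}
}}}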

N input files will generate M map tasks to be run, and each map task
will generate as many output files as there are reduce tasks
configured in the system (for example, 10 map tasks and 4 reduce tasks
yield 40 map output files). Each output file is targeted at a specific
reduce task, and the map output pairs from all the Map tasks are
routed so that all pairs for a given key end up in files targeted at
the same reduce task.

== Combiner ==
The rationale behind using a Combiner is that, as the Map operation
outputs its pairs, they are already available in memory for a
reduce-type function. If a Combiner is used, the Map output is not
immediately written to the output. Instead it is collected in lists,
one list per key. When a certain number of key-value pairs has been
collected, this buffer is flushed by passing all the values for each
key to the Combiner's reduce method, and the key-value pairs the
combine operation produces are output as if they had been created by
the original Map operation.

For example, a word count MapReduce application whose Map operation
outputs (word, 1) pairs as words are encountered in the input can use
a Combiner to speed up processing. The output is first gathered in
in-memory lists (instead of on disk), one list per word. Once a
certain number of pairs has been output, the combine operation is
called once per unique word with that word's list available as an
iterator. The Combiner then emits (word, count-in-this-part-of-the-input)
pairs. From the viewpoint of the Reduce operation this contains the
same information as the original Map output, but there is far less
data to write to disk and read back from disk.
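
A word-count reduce function along these lines can serve both as the
Combiner and as the final Reducer, since both simply sum the counts
they are given. Again this is a sketch assuming the later, generic
form of the old mapred API:

{{{
// Sketch only: sums (word, count) pairs. Registered as the Combiner
// it emits per-map partial counts; registered as the Reducer it
// emits the final totals.
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class WordCountReducer extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {

  public void reduce(Text word, Iterator<IntWritable> counts,
                     OutputCollector<Text, IntWritable> output,
                     Reporter reporter) throws IOException {
    int sum = 0;
    while (counts.hasNext()) {
      sum += counts.next().get();   // add up 1's or partial counts
    }
    output.collect(word, new IntWritable(sum));
  }
}
}}}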

== Reduce ==
When a reduce task starts, its input is scattered across several
files, possibly on several DFS nodes. If run in distributed mode these
need first to be copied to the local filesystem in a ''copy phase''
(see
[http://svn.apache.org/viewcvs.cgi/lucene/hadoop/trunk/src/java/org/apache/hadoop/mapred/ReduceTaskRunner.java?view=markup ReduceTaskRunner]).

Once all the data is available locally it is appended to one file
(''append phase''). The file is then merge sorted so that the
key-value pairs for a given key are contiguous (''sort phase''). This
makes the actual reduce operation simple: the file is read
sequentially and the values are passed to the reduce method with an
iterator that reads the input file until the next key value is
encountered. See
[http://svn.apache.org/viewcvs.cgi/lucene/hadoop/trunk/src/java/org/apache/hadoop/mapred/ReduceTask.java?view=markup ReduceTask]
for details.

In the end the output will consist of one output file per Reduce task
run. The format of the files can be specified with
[http://lucene.apache.org/hadoop/docs/api/org/apache/hadoop/mapred/JobConf.html#setOutputFormat(java.lang.Class) JobConf.setOutputFormat].
If SequenceFileOutputFormat is used, the output key and value classes
must also be specified.
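
Putting the pieces together, wiring up such a job might look roughly
like the sketch below. It assumes the later form of the old
JobConf/JobClient API (for instance, FileInputFormat.setInputPaths
appeared in later releases than the one this page describes), and the
paths and the WordCountMapper/WordCountReducer classes are the
illustrative ones sketched above:

{{{
// Sketch of wiring a word-count job with the old mapred API.
// Method names follow later 0.x/1.x releases; "in" and "out" are
// placeholder paths.
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileOutputFormat;
import org.apache.hadoop.mapred.TextInputFormat;

public class WordCountJob {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(WordCountJob.class);
    conf.setJobName("wordcount");

    conf.setInputFormat(TextInputFormat.class);       // (offset, line) pairs
    conf.setOutputFormat(SequenceFileOutputFormat.class);
    conf.setOutputKeyClass(Text.class);               // required with SequenceFile output
    conf.setOutputValueClass(IntWritable.class);

    conf.setMapperClass(WordCountMapper.class);
    conf.setCombinerClass(WordCountReducer.class);    // combiner = reducer for word count
    conf.setReducerClass(WordCountReducer.class);
    conf.setNumReduceTasks(4);                        // 4 output files, one per reduce task

    FileInputFormat.setInputPaths(conf, new Path("in"));
    FileOutputFormat.setOutputPath(conf, new Path("out"));
    JobClient.runJob(conf);
  }
}
}}}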
