hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Lucene-hadoop Wiki] Update of "HadoopMapReduce" by TeppoKurki
Date Wed, 19 Apr 2006 04:59:58 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Lucene-hadoop Wiki" for change notification.

The following page has been changed by TeppoKurki:
http://wiki.apache.org/lucene-hadoop/HadoopMapReduce

------------------------------------------------------------------------------
  several Splits. The splitting does not know anything about the
  input file's internal logical structure; for example,
  line-oriented text files are split on arbitrary byte boundaries.
- Then a new MapTask is created per FileSplit.
+ Then a new !MapTask is created per !FileSplit.
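  To make the byte-boundary point concrete, here is a minimal
  stand-alone sketch (plain Java, not Hadoop's own splitting code)
  that chops a file's byte range into fixed-size splits; note that
  nothing stops a split from ending in the middle of a line:
{{{
import java.util.ArrayList;
import java.util.List;

// Sketch only: divide a file's byte range into fixed-size splits
// the way the framework divides input files, ignoring line
// boundaries entirely.
public class SplitSketch {
  static List splitRanges(long fileLength, long splitSize) {
    List ranges = new ArrayList();                 // of long[]{start, length}
    for (long start = 0; start < fileLength; start += splitSize) {
      long length = Math.min(splitSize, fileLength - start);
      ranges.add(new long[] { start, length });    // may cut a line in half
    }
    return ranges;
  }

  public static void main(String[] args) {
    List ranges = splitRanges(100L, 32L);          // 100-byte file, 32-byte splits
    for (int i = 0; i < ranges.size(); i++) {
      long[] r = (long[]) ranges.get(i);
      System.out.println("split " + i + ": offset " + r[0] + ", length " + r[1]);
    }
  }
}
}}}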
  
  When an individual MapTask starts it will open a new output
  writer per configured Reduce task. It will then proceed to read
@@ -29, +29 @@

  
  As key-value pairs are read from the RecordReader they are
  passed to the configured Mapper. The user supplied Mapper does
- whatever it wants with the input pair and calls	[http://lucene.apache.org/hadoop/docs/api/org/apache/hadoop/mapred/OutputCollector.html#collect(org.apache.hadoop.io.WritableComparable,%20org.apache.hadoop.io.Writable)
OutputCollectore.collect] with key-value pairs of its own choosing. The output it
+ whatever it wants with the input pair and calls	[http://lucene.apache.org/hadoop/docs/api/org/apache/hadoop/mapred/OutputCollector.html#collect(org.apache.hadoop.io.WritableComparable,%20org.apache.hadoop.io.Writable)
OutputCollector.collect] with key-value pairs of its own choosing. The output it
  generates must use one key class and one value class, because
  the Map output will be eventually written into a SequenceFile,
  which has per file type information and all the records must
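  For illustration, a word-count style Mapper against the untyped
  mapred API linked above might look as follows. This is a
  hypothetical sample, not code from the Hadoop sources, and the
  concrete key/value classes (Text here, UTF8 in older releases)
  depend on the Hadoop version:
{{{
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Minimal word-count Mapper sketch: every collect() call uses one
// key class (Text) and one value class (IntWritable), as the
// SequenceFile output requires.
public class WordCountMapper implements Mapper {
  private static final IntWritable ONE = new IntWritable(1);
  private Text word = new Text();

  public void configure(JobConf job) {}            // no per-job setup needed
  public void close() {}                           // nothing to release

  public void map(WritableComparable key, Writable value,
                  OutputCollector output, Reporter reporter) throws IOException {
    // Assumes a line-oriented input where the value is the line text.
    StringTokenizer itr = new StringTokenizer(((Text) value).toString());
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      output.collect(word, ONE);                   // key/value of the Mapper's choosing
    }
  }
}
}}}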
@@ -78, +78 @@

  When a reduce task starts, it will have its input scattered
  across several files, possibly on several DFS nodes. If run in
  distributed mode, these first need to be copied to the local
- filesystem in a ''copy phase'' (see [href="http://svn.apache.org/viewcvs.cgi/lucene/hadoop/trunk/src/java/org/apache/hadoop/mapred/ReduceTaskRunner.java?view=markup
ReduceTaskRunner]).
+ filesystem in a ''copy phase'' (see [http://svn.apache.org/viewcvs.cgi/lucene/hadoop/trunk/src/java/org/apache/hadoop/mapred/ReduceTaskRunner.java?view=markup
ReduceTaskRunner]).
  
  Once all the data is available locally it is appended to one
  file (''append phase''). The file is then merge-sorted so that
  the key-value pairs for a given key are contiguous (''sort phase'').
  This makes the actual reduce operation simple: the file is
  read sequentially and the values are passed to the reduce method
  with an iterator reading the input file until the next key
- value is encountered. See [href="http://svn.apache.org/viewcvs.cgi/lucene/hadoop/trunk/src/java/org/apache/hadoop/mapred/ReduceTask.java?view=markup
 ReduceTask] for details.
+ value is encountered. See [http://svn.apache.org/viewcvs.cgi/lucene/hadoop/trunk/src/java/org/apache/hadoop/mapred/ReduceTask.java?view=markup
 ReduceTask] for details.
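  A matching Reducer sketch (again a hypothetical sample, not the
  ReduceTask internals linked above) shows how that per-key
  iterator is consumed:
{{{
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Minimal word-count Reducer sketch: the framework hands reduce()
// one key plus an iterator over all values for that key, read
// sequentially from the sorted file until the next key appears.
public class WordCountReducer implements Reducer {
  public void configure(JobConf job) {}            // no per-job setup needed
  public void close() {}                           // nothing to release

  public void reduce(WritableComparable key, Iterator values,
                     OutputCollector output, Reporter reporter) throws IOException {
    int sum = 0;
    while (values.hasNext()) {
      sum += ((IntWritable) values.next()).get();  // all values share one class
    }
    output.collect(key, new IntWritable(sum));     // one output record per key
  }
}
}}}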
  
  In the end, the output will consist of one output file per
  Reduce task run. The format of the files can be specified with
