hadoop-common-dev mailing list archives

From "Alejandro Abdelnur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3149) supporting multiple outputs for M/R jobs
Date Thu, 03 Apr 2008 03:52:24 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12584975#action_12584975 ]

Alejandro Abdelnur commented on HADOOP-3149:

Findbugs: fixed one; the second one is a false positive (explained in a previous comment).

Fixing checkstyle warnings in the next version of the patch.

* It is not always desirable for the OutputFormat/Key/Value classes of all outputs to be the same; we have several use cases where they differ (I've pointed this out as a limitation of Runping's patch).
* Limiting the names to [a-zA-Z0-9] has a purpose: the names are used to create the files under the output dir, and you don't want funny characters in the leafname (a minimal check is sketched right after this list).
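
For illustration, this is the kind of check that restriction allows; the {{checkNamedOutputName}} helper name is hypothetical, not part of the patch:

// hypothetical helper: reject named outputs whose names would put
// funny characters in the file leafnames under the output dir
private static void checkNamedOutputName(String namedOutput) {
  if (namedOutput == null || !namedOutput.matches("[A-Za-z0-9]+")) {
    throw new IllegalArgumentException("Named output '" + namedOutput
        + "' may only contain [A-Za-z0-9] characters");
  }
}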

Now I understand what you have in mind with the KeyValue object and not having MultipleOutput* classes. I have the following comments on that:

# See the first bullet item above.
# Piggybacking on the current collector to write to multiple outputs avoids introducing new classes, but it is not obvious and, I would say, confusing for the developer; I'd rather have explicit methods/classes for handling multiple outputs.

I would rather refactor the code to:

* Get rid of the {{MultipleOutput Task/Map/Reduce/Collector}} classes.
* Not introduce a {{KeyValue}} class.

Add a {{collect(String namedOutput, WritableComparable key, Writable value)}} method to {{MultipleOutputs}}, and the usage pattern would be:

public class MyReducer implements Reducer {
  private MultipleOutputs mos;

  public void configure(JobConf conf) {
    mos = new MultipleOutputs(conf);
  }

  public void reduce(WritableComparable key, Iterator<Writable> values,
      OutputCollector collector, Reporter reporter) throws IOException {
    Writable value = values.next();
    mos.collect("aa", key, value);  // write to the named output 'aa'
  }

  public void close() throws IOException {
    mos.close();  // close all named outputs
  }
}

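For concreteness, a rough, hypothetical sketch of what {{MultipleOutputs}} itself could look like. Everything below is illustrative only: it hardcodes {{SequenceFileOutputFormat}} rather than looking up per-output OutputFormat/key/value classes from the JobConf, and it ignores the speculative execution/output promotion semantics discussed in this issue:

import java.io.IOException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordWriter;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.SequenceFileOutputFormat;

public class MultipleOutputs {
  private final JobConf conf;
  private final Map writers = new HashMap();  // named output name -> RecordWriter

  public MultipleOutputs(JobConf conf) {
    this.conf = conf;
  }

  // Writes a key/value to the given named output, lazily opening one
  // file per name; a real implementation would instantiate the
  // OutputFormat/key/value classes configured for that name.
  public void collect(String namedOutput, WritableComparable key, Writable value)
      throws IOException {
    RecordWriter writer = (RecordWriter) writers.get(namedOutput);
    if (writer == null) {
      // suffix the file name with the task partition to avoid
      // collisions among tasks (e.g. 'aa-00002')
      String fileName = namedOutput + "-" + conf.get("mapred.task.partition");
      writer = new SequenceFileOutputFormat().getRecordWriter(
          FileSystem.get(conf), conf, fileName, Reporter.NULL);
      writers.put(namedOutput, writer);
    }
    writer.write(key, value);
  }

  // Closes every record writer opened for a named output.
  public void close() throws IOException {
    for (Iterator it = writers.values().iterator(); it.hasNext();) {
      ((RecordWriter) it.next()).close(Reporter.NULL);
    }
  }
}
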
Configuration of the job prior to dispatching would remain the same.
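
For illustration, that setup could look as follows, assuming a static {{addNamedOutput}} helper on {{MultipleOutputs}} (not shown in the sketch above) as one possible shape for configuring a named output's name, output format and key/value classes:

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileOutputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class MyJob {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(MyJob.class);
    conf.setReducerClass(MyReducer.class);
    // mapper, input/output paths, etc. configured as usual

    // the default (unnamed) output is configured exactly as today
    conf.setOutputFormat(TextOutputFormat.class);
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(Text.class);

    // hypothetical helper: register the named output 'aa' with its
    // own output format and key/value classes
    MultipleOutputs.addNamedOutput(conf, "aa",
        SequenceFileOutputFormat.class, Text.class, LongWritable.class);

    JobClient.runJob(conf);
  }
}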

> supporting multiple outputs for M/R jobs
> ----------------------------------------
>                 Key: HADOOP-3149
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3149
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: mapred
>         Environment: all
>            Reporter: Alejandro Abdelnur
>            Assignee: Alejandro Abdelnur
>             Fix For: 0.17.0
>         Attachments: patch3149.txt, patch3149.txt, patch3149.txt
> The OutputCollector supports writing data to a single output: the 'part' files in the output path.
> We found it quite common that our M/R jobs have to write data to different outputs, for example when classifying data as NEW, UPDATE, DELETE or NO-CHANGE to later do different processing on each.
> Handling the initialization of additional outputs from within the M/R code complicates the code and is counter-intuitive to the notion of job configuration.
> It would be desirable to:
> # Configure the additional outputs in the JobConf, potentially specifying different OutputFormats, key and value classes for each one.
> # Write to the additional outputs in a similar way as data is written to the OutputCollector.
> # Support the speculative execution semantics for the output files, which are only visible in the final output for promoted tasks.
> To support multiple outputs the following classes would be added to mapred/lib:
> * {{MOJobConf}}: extends {{JobConf}} adding methods to define named outputs (name, output format, key class, value class).
> * {{MOOutputCollector}}: extends {{OutputCollector}} adding a {{collect(String outputName, WritableComparable key, Writable value)}} method.
> * {{MOMapper}} and {{MOReducer}}: implement {{Mapper}} and {{Reducer}} adding new {{configure}}, {{map}} and {{reduce}} signatures that take the corresponding {{MO}} classes and perform the proper initialization.
> The data flow behavior would be: key/values written to the default (unnamed) output (using the original OutputCollector {{collect}} signature) take part in the shuffle/sort/reduce processing phases; key/values written to a named output from within a map don't.
> The named output files would be named using the task type and task ID to avoid collisions among tasks (e.g. 'new-m-00002' and 'new-r-00001').
> Together with the setInputPathFilter feature introduced by HADOOP-2055, it would become very easy to chain jobs working on particular named outputs within a single directory.
> We are using this pattern heavily and it has greatly simplified our M/R code as well as the chaining of different M/R jobs.
> We wanted to contribute this back to Hadoop as we think it is a generic feature many could benefit from.
