hadoop-common-dev mailing list archives

From "Alejandro Abdelnur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3702) add support for chaining Maps in a single Map and after a Reduce [M*/RM*]
Date Thu, 28 Aug 2008 05:22:44 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12626431#action_12626431 ]

Alejandro Abdelnur commented on HADOOP-3702:
--------------------------------------------

[apologies for the delay following up on this, I was off all last week]

*On using generics*

Enis, I don't think the use of generics in your proposed patch is correct. Let me try to explain.

_First reason:_

The intended normal use of {{ChainMapper}} and {{ChainReducer}} is via configuration, i.e.:

{code}
  JobConf conf = ...
  ChainMapper.addMapper(conf, ...);
  ChainMapper.addMapper(conf, ...);
  ChainMapper.addMapper(conf, ...);
  ...
{code}

Making {{ChainMapper}} generic, i.e. {{ChainMapper<K1, V1, K2, V2>}}, does not make sense in this
case because a developer never instantiates it. Thus no compile-time type checking is done here
for {{K1, V1, K2, V2}}.
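
To make this concrete, a minimal sketch (with hypothetical stand-in classes, not the real Hadoop
API) of why class-level type parameters buy nothing when the class is only used through static
configuration methods:

{code}
// Hypothetical stand-in for ChainMapper<K1, V1, K2, V2>: generic at the class level only.
class Chain<K1, V1, K2, V2> {

  // The configuration entry point is static, so K1..V2 are never bound here and the
  // compiler cannot relate one addMapper call to the next.
  static void addMapper(Object conf, Class<?> mapperClass,
                        Class<?> inKey, Class<?> inValue,
                        Class<?> outKey, Class<?> outValue) {
    // the classes would simply be stored in the configuration
  }

  public static void main(String[] args) {
    Object conf = new Object();
    // Compiles without complaint even though the output classes of the first call
    // (Long, String) do not match the input classes of the second (Integer, Integer).
    Chain.addMapper(conf, Object.class, String.class, String.class, Long.class, String.class);
    Chain.addMapper(conf, Object.class, Integer.class, Integer.class, String.class, String.class);
  }
}
{code}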

_Second reason:_

Even if you did create an instance of {{ChainMapper}} bound to concrete classes for {{K1, V1,
K2, V2}}, it would not help in the common case where the mappers in the chain use different
key/value classes.

The contract between the maps in a chain is that the input key/value classes of the first mapper
are the same as the input key/value classes of the job, the input key/value classes of the second
mapper are the same as the output key/value classes of the first mapper, and so on; the output
key/value classes of the last mapper in the chain (in the {{ChainMapper}} case) are the same as
the input key/value classes of the reducer.

For example, take a job whose map input/output classes are {{K1, V1, K2, V2}}; you can have
the following chain:

{code}
  JobConf conf = ...
  ChainMapper.addMapper(conf, AMap.class, K1.class, V1.class, Ka.class, Va.class, null);
  ChainMapper.addMapper(conf, BMap.class, Ka.class, Va.class, Kb.class, Vb.class, null);
  ChainMapper.addMapper(conf, CMap.class, Kb.class, Vb.class, K2.class, V2.class, null);
{code}
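
For completeness, a rough sketch of how a full job combining {{ChainMapper}} and {{ChainReducer}}
could be wired up, using the same call shape as the snippets above (the final signatures may
differ); {{AMap}}, {{BMap}}, {{CMap}}, {{TheReducer}} and the driver class {{ChainExample}} are
hypothetical:

{code}
  // imports assumed: org.apache.hadoop.fs.Path, org.apache.hadoop.io.*, org.apache.hadoop.mapred.*
  JobConf conf = new JobConf(ChainExample.class);
  conf.setJobName("chain-example");
  FileInputFormat.setInputPaths(conf, new Path("in"));
  FileOutputFormat.setOutputPath(conf, new Path("out"));

  // map phase: AMap then BMap run back to back, no intermediate HDFS writes between them
  ChainMapper.addMapper(conf, AMap.class, LongWritable.class, Text.class, Text.class, Text.class, null);
  ChainMapper.addMapper(conf, BMap.class, Text.class, Text.class, Text.class, Text.class, null);

  // reduce phase: the reducer, followed by a map that runs after it
  ChainReducer.setReducer(conf, TheReducer.class, Text.class, Text.class, Text.class, Text.class, null);
  ChainReducer.addMapper(conf, CMap.class, Text.class, Text.class, Text.class, IntWritable.class, null);

  JobClient.runJob(conf);
{code}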

*On using a {{Serializer}} for {{Configuration}}*

Note that the {{Serializer}} works for {{Configuration}} and its subclasses; it is not bound to {{Configuration}} itself.
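
As a rough illustration of that point (assuming the standard {{SerializationFactory}} /
{{Serializer}} API and that {{Configuration}} is accepted by the configured serialization, e.g.
as a {{Writable}}), the serializer obtained for {{Configuration}} handles subclasses such as
{{JobConf}} as well:

{code}
  // rough sketch, not the patch's code
  Configuration conf = new Configuration();
  SerializationFactory factory = new SerializationFactory(conf);

  Serializer<Configuration> serializer = factory.getSerializer(Configuration.class);
  ByteArrayOutputStream out = new ByteArrayOutputStream();
  serializer.open(out);
  serializer.serialize(new JobConf());   // a Configuration subclass works too
  serializer.close();

  Deserializer<Configuration> deserializer = factory.getDeserializer(Configuration.class);
  deserializer.open(new ByteArrayInputStream(out.toByteArray()));
  Configuration restored = deserializer.deserialize(null);
  deserializer.close();
{code}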

I'm OK with your proposed patch here.


> add support for chaining Maps in a single Map and after a Reduce [M*/RM*]
> -------------------------------------------------------------------------
>
>                 Key: HADOOP-3702
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3702
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: mapred
>         Environment: all
>            Reporter: Alejandro Abdelnur
>            Assignee: Alejandro Abdelnur
>         Attachments: Hadoop-3702.patch, patch3702.txt, patch3702.txt, patch3702.txt,
>                      patch3702.txt, patch3702.txt, patch3702.txt, patch3702.txt, patch3702.txt,
>                      patch3702.txt, patch3702.txt
>
>
> On the same input, we usually need to run multiple Maps one after the other with no
> Reduce in between. We also have to run multiple Maps after the Reduce.
> If all pre-Reduce Maps are chained together and run as a single Map, a significant amount
> of disk I/O will be avoided.
> Similarly, all post-Reduce Maps can be chained together and run in the Reduce phase after
> the Reduce.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

