hadoop-mapreduce-issues mailing list archives

From "Tom White (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAPREDUCE-1462) Enable context-specific and stateful serializers in MapReduce
Date Fri, 05 Feb 2010 17:51:28 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-1462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12830196#action_12830196 ]

Tom White commented on MAPREDUCE-1462:
--------------------------------------

Owen, thanks for posting your design. For convenience, I've reproduced here the comments on the
design that I made on MAPREDUCE-1126:

* The changes to the serialization API are not backwards compatible, so a new package of serializer
types would need to be created. Is this really necessary to achieve Avro integration?
* I'm not sure why we need to serialize serializations. The patch in MAPREDUCE-1126 avoids
the need for this by using a simple string-based mechanism for configuration (see the sketch
after the code comparison below). An opaque binary format also makes it difficult to retrieve
and use the serialization from other languages (e.g. C++ or other Pipes languages); my latest
patch on MAPREDUCE-1126 is language-neutral in this regard.
* Adding a side file for the context-serializer mapping complicates the implementation. It's
not clear what container format would be used for the side file (Avro container, custom?). I
understand that putting framework configuration in the job configuration may not be desirable,
but it has been done in the past, so I don't know why it is being ruled out here. I would rather
see a separate effort (and discussion) to create a "private" job configuration (not accessible
by user code) for such settings, above and beyond the configuration needed for serialization.
* The user API is no shorter than the one proposed in MAPREDUCE-1126. Compare:
{code}
Schema keySchema = ...
AvroGenericSerialization serialization = new AvroGenericSerialization();
serialization.setSchema(keySchema);
job.set(SerializationContext.MAP_OUTPUT_KEY, serialization);
{code}
with
{code}
Schema keySchema = ...
AvroGenericData.setMapOutputKeySchema(job, keySchema);
{code}
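
To make the second bullet concrete, here is a minimal sketch of what a string-based,
language-neutral configuration of a context-specific serialization could look like. The
configuration key names and the serialization name below are illustrative assumptions, not
the actual MAPREDUCE-1126 API:
{code}
// Sketch only: the key names and serialization name are hypothetical, chosen
// to illustrate storing per-context serialization metadata as plain strings
// that a non-Java client (e.g. a C++ Pipes task) could also read.
import org.apache.avro.Schema;
import org.apache.hadoop.conf.Configuration;

public class StringConfigSketch {
  // Hypothetical configuration keys for the map-output-key context.
  static final String MAP_OUTPUT_KEY_SERIALIZATION =
      "mapreduce.map.output.key.serialization";
  static final String MAP_OUTPUT_KEY_SCHEMA =
      "mapreduce.map.output.key.schema";

  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // The Avro schema travels as its JSON text, which any language can parse.
    Schema keySchema = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"K\",\"fields\":"
            + "[{\"name\":\"id\",\"type\":\"long\"}]}");

    // Name the serialization and its state with ordinary string values.
    conf.set(MAP_OUTPUT_KEY_SERIALIZATION, "AvroGenericSerialization");
    conf.set(MAP_OUTPUT_KEY_SCHEMA, keySchema.toString());

    // The framework (or another language binding) recovers the schema from
    // the same string-valued key; no opaque binary side file is needed.
    Schema recovered =
        new Schema.Parser().parse(conf.get(MAP_OUTPUT_KEY_SCHEMA));
    System.out.println(recovered.getFullName());
  }
}
{code}
Whatever the exact key names, the point is that both the serialization and its state live in
the job configuration as strings, so they stay visible to Pipes and other non-Java clients.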


> Enable context-specific and stateful serializers in MapReduce
> -------------------------------------------------------------
>
>                 Key: MAPREDUCE-1462
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1462
>             Project: Hadoop Map/Reduce
>          Issue Type: New Feature
>          Components: task
>            Reporter: Owen O'Malley
>            Assignee: Owen O'Malley
>         Attachments: h-1462.patch
>
>
> Although the current serializer framework is powerful, within the context of a job it
is limited to picking a single serializer for a given class. Additionally, Avro generic serialization
can make use of additional configuration/state such as the schema. (Most other serialization
frameworks including Writable, Jute/Record IO, Thrift, Avro Specific, and Protocol Buffers
only need the object's class name to deserialize the object.)
> With the goal of keeping the easy things easy and maintaining backwards compatibility,
we should be able to allow applications to use context-specific (e.g. map output key) serializers
in addition to the current type-based ones that handle the majority of the cases. Furthermore,
we should be able to support serializer-specific configuration/metadata in a type-safe manner
without cluttering up the base API with a lot of new methods that will confuse new users.
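
The statefulness point in the description above can be seen with Avro's public generic API;
the snippet below is only an illustration (it is not part of the attached h-1462.patch): both
writing and reading a generic record require the schema itself, so the framework has to carry
the schema as per-context state rather than inferring everything from a class name.
{code}
// Illustration only: Avro generic (de)serialization needs the schema,
// not just a class name, on both the writing and the reading side.
import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;

public class AvroGenericIsStateful {
  public static void main(String[] args) throws Exception {
    Schema schema = new Schema.Parser().parse(
        "{\"type\":\"record\",\"name\":\"Pair\",\"fields\":["
            + "{\"name\":\"key\",\"type\":\"string\"},"
            + "{\"name\":\"value\",\"type\":\"long\"}]}");

    GenericRecord record = new GenericData.Record(schema);
    record.put("key", "a");
    record.put("value", 1L);

    // Writing needs the schema...
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
    new GenericDatumWriter<GenericRecord>(schema).write(record, encoder);
    encoder.flush();

    // ...and so does reading: the bytes carry no type information, so the
    // schema must be supplied as state alongside (or instead of) a class name.
    BinaryDecoder decoder =
        DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
    GenericRecord roundTripped =
        new GenericDatumReader<GenericRecord>(schema).read(null, decoder);
    System.out.println(roundTripped);
  }
}
{code}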

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

