hadoop-common-issues mailing list archives

From "Tom White (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-6685) Change the generic serialization framework API to use serialization-specific bytes instead of Map<String,String> for configuration
Date Mon, 22 Nov 2010 19:31:22 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-6685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12934566#action_12934566 ]

Tom White commented on HADOOP-6685:

I have two serious issues with the current patch, both of which I have mentioned above. Given
that they have not been adequately addressed, I feel I have no option but to vote -1.

The first is that no change is needed in SequenceFile unless we want to support Avro. But
given that Avro data files were designed for exactly this, and are multi-lingual, why change the
SequenceFile format solely to support Avro? Are Avro data files insufficient? Note that Thrift
and Protocol Buffers can already be stored in today's SequenceFiles.
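To make the last point concrete, here is a schematic sketch (plain Java, not the real SequenceFile wire format or the Hadoop API) of why no format change is needed: a record is ultimately just length-prefixed bytes, so the output of Thrift's or Protocol Buffers' own serializers can be stored as an opaque value, e.g. wrapped in a BytesWritable.

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

// Schematic only: a record is a length-prefixed key/value byte pair, so
// any serialization's bytes fit without the container knowing the format.
class OpaqueRecordDemo {
    static byte[] writeRecord(byte[] key, byte[] value) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(key.length);    // key length
        out.write(key);              // key bytes (any serialization)
        out.writeInt(value.length);  // value length
        out.write(value);            // value bytes, e.g. a PB message's toByteArray()
        return buf.toByteArray();
    }

    static byte[][] readRecord(byte[] record) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(record));
        byte[] key = new byte[in.readInt()];
        in.readFully(key);
        byte[] value = new byte[in.readInt()];
        in.readFully(value);
        return new byte[][] { key, value };
    }

    public static void main(String[] args) throws IOException {
        // Pretend these bytes came from a Protocol Buffers or Thrift serializer.
        byte[] pbBytes = "serialized-payload".getBytes(StandardCharsets.UTF_8);
        byte[] record = writeRecord("k1".getBytes(StandardCharsets.UTF_8), pbBytes);
        byte[][] back = readRecord(record);
        System.out.println(new String(back[1], StandardCharsets.UTF_8));
    }
}
```

The container stays serialization-agnostic; only the code that produced the value bytes needs the PB or Thrift library on its classpath.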

The second is that this patch adds new serializations which introduce into the core a new
dependency on a particular version of each of Avro, Thrift, and PB, in a non-pluggable way.

This type of dependency is qualitatively different from other dependencies. Hadoop depends on
log4j, for instance, so if a user's code does too, then it needs to use the same version. A
recent JIRA made it possible to specify a different version of log4j in the job, but this
only works if the version the user specifies is compatible with *both* their code and the
Hadoop kernel code.

However, in the case of a PB serialization, for example, the PB library is not used in Hadoop
except in the serialization code for serializing the user's data type. So it's a user-level
concern, and should be compiled as such - putting it in core Hadoop is asking for trouble
in the future, since Hadoop releases won't keep pace with the union of PB, Thrift, and
Avro releases. These serialization plugins should be standalone, or at least easily re-compilable
without recompiling all of Hadoop, such as a contrib module. The user
just treats the plugin JAR as another code dependency.
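A minimal sketch of that pluggable arrangement, using a simplified stand-in interface (the real one is org.apache.hadoop.io.serializer.Serialization; names here are illustrative, not the patch's API): the kernel only knows the interface, and a PB- or Thrift-backed implementation lives in its own JAR, compiled against whatever library version the user's job already uses.

```java
import java.nio.charset.StandardCharsets;
import java.util.*;

// Simplified stand-in for Hadoop's Serialization interface (illustrative).
interface SimpleSerialization<T> {
    boolean accept(Class<?> c);   // can this plugin handle the type?
    byte[] serialize(T obj);
    T deserialize(byte[] bytes);
}

// A trivial plugin for String, standing in for a user-supplied
// Thrift/PB plugin shipped as an ordinary job dependency.
class StringSerialization implements SimpleSerialization<String> {
    public boolean accept(Class<?> c) { return c == String.class; }
    public byte[] serialize(String s) { return s.getBytes(StandardCharsets.UTF_8); }
    public String deserialize(byte[] b) { return new String(b, StandardCharsets.UTF_8); }
}

// The kernel discovers plugins from a registered list (in Hadoop, the
// "io.serializations" configuration key) and never links the PB/Thrift
// libraries itself.
class PluginSerializationFactory {
    private final List<SimpleSerialization<?>> plugins = new ArrayList<>();
    void register(SimpleSerialization<?> s) { plugins.add(s); }
    @SuppressWarnings("unchecked")
    <T> SimpleSerialization<T> getSerialization(Class<T> c) {
        for (SimpleSerialization<?> s : plugins)
            if (s.accept(c)) return (SimpleSerialization<T>) s;
        throw new IllegalArgumentException("no serialization for " + c);
    }
}
```

Upgrading the PB plugin then means swapping one JAR in the job's classpath, not cutting a new Hadoop release.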

To move forward on this issue, it's clear that compromise is needed. I actually prefer strings
in serialization (HADOOP-6420), but am prepared to compromise over it in the interests of
finding consensus.

> Change the generic serialization framework API to use serialization-specific bytes instead of Map<String,String> for configuration
> ----------------------------------------------------------------------------------------------------------------------------------
>                 Key: HADOOP-6685
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6685
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Owen O'Malley
>            Assignee: Owen O'Malley
>             Fix For: 0.22.0
>         Attachments: libthrift.jar, serial.patch, serial4.patch, serial6.patch, serial7.patch,
> Currently, the generic serialization framework uses Map<String,String> for the
> serialization-specific configuration. Since this data is really internal to the specific serialization,
> I think we should change it to be an opaque binary blob. This will simplify the interface
> for defining specific serializations for different contexts (MAPREDUCE-1462). It will also
> move us toward having serialized objects for Mappers, Reducers, etc. (MAPREDUCE-1183).
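The API change the description proposes can be sketched as follows. This is an illustrative contrast, not the patch's actual classes: today the framework hands each serialization a generic Map<String,String>; under the proposal it would carry an opaque byte[] whose encoding is private to the serialization.

```java
import java.nio.charset.StandardCharsets;
import java.util.*;

// Illustrative contrast between the two configuration shapes; the key
// name "avro.schema" and these method names are hypothetical.
class SerializationConfigDemo {
    // Current style: string map that the framework stores and forwards,
    // with keys every layer can see.
    static Map<String, String> asMap(String schema) {
        Map<String, String> m = new HashMap<>();
        m.put("avro.schema", schema);
        return m;
    }

    // Proposed style: the serialization encodes and decodes its own
    // metadata; to the framework it is just opaque bytes.
    static byte[] asBlob(String schema) {
        return schema.getBytes(StandardCharsets.UTF_8);
    }

    static String fromBlob(byte[] blob) {
        return new String(blob, StandardCharsets.UTF_8);
    }
}
```

With the blob form, only the serialization itself needs to understand the payload, which is the simplification the description argues for.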

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
