hadoop-common-dev mailing list archives

From "Chris Douglas (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-2853) Add Writable for very large lists of key / value pairs
Date Fri, 07 Mar 2008 22:39:46 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-2853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12576424#action_12576424 ]

Chris Douglas commented on HADOOP-2853:

It is even more efficient to use JobConf::setOutputValueGroupingComparator. If I'm reading
this correctly, you can define your map output comparator (JobConf::setOutputKeyComparatorClass)
to sort by host, then by tag (the opposite of the preceding suggestion), so that <<x,
host>, hostStats> is seen just before the <<y, uri>, uriStat>
values. This way, you don't need to slurp the <host, hostStat> entries into
a map. Again, if I'm understanding the semantics of setOutputValueGroupingComparator, you
would define your comparators such that your output key comparator sorts by host then
tag, but the grouping comparator considers keys with the same host as equal (i.e. belonging
to the same reduce). In the reduce, your first entry (entries?) would be <<x,
host>, hostStats> and the ones that follow would be <<y, uri>,
uriStat>. I haven't tested this, but it might work.
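The sort/group arrangement described above can be sketched with plain Java comparators
(illustrative names only, not Hadoop's actual interfaces): the sort comparator orders by
(host, tag) so the single host-level record precedes its uri-level records, while the
grouping comparator treats all keys with the same host as one reduce group.

```java
import java.util.*;

public class SecondarySortSketch {
    // Composite key: host plus a tag (0 = host-level record, 1 = uri-level record).
    static final class Key {
        final String host; final int tag; final String payload;
        Key(String host, int tag, String payload) {
            this.host = host; this.tag = tag; this.payload = payload;
        }
        public String toString() { return host + "/" + tag + "/" + payload; }
    }

    // Output key comparator: sort by host, then by tag, so the
    // <<x, host>, hostStats> record sorts before the <<y, uri>, uriStat> records.
    static final Comparator<Key> SORT =
        Comparator.comparing((Key k) -> k.host).thenComparingInt(k -> k.tag);

    // Grouping comparator: keys with the same host compare equal, so one
    // reduce() call would see the host record followed by its uri records.
    static final Comparator<Key> GROUP = Comparator.comparing(k -> k.host);

    public static void main(String[] args) {
        List<Key> records = new ArrayList<>(Arrays.asList(
            new Key("b.org", 1, "uriStat"),
            new Key("a.com", 1, "uriStat"),
            new Key("a.com", 0, "hostStats"),
            new Key("b.org", 0, "hostStats")));
        records.sort(SORT);
        // Walk the sorted records, starting a new "reduce group" whenever
        // the grouping comparator reports a different host.
        Key prev = null;
        for (Key k : records) {
            if (prev == null || GROUP.compare(prev, k) != 0)
                System.out.println("-- group " + k.host);
            System.out.println(k);
            prev = k;
        }
    }
}
```

Within each group the tag-0 (hostStats) record prints first, which is exactly the ordering
the reduce would rely on.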

> Add Writable for very large lists of key / value pairs
> ------------------------------------------------------
>                 Key: HADOOP-2853
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2853
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: io
>    Affects Versions: 0.17.0
>            Reporter: Andrzej Bialecki 
>             Fix For: 0.17.0
>         Attachments: sequenceWritable-v1.patch, sequenceWritable-v2.patch, sequenceWritable-v3.patch,
sequenceWritable-v4.patch, sequenceWritable-v5.patch
> Some map-reduce jobs need to aggregate and process very long lists as a single value.
This usually happens when keys from a large domain are mapped into a small domain, and their
associated values cannot be aggregated into a few values but need to be preserved as members
of a large list. Currently this can be implemented with a MapWritable or ArrayWritable - however,
Hadoop needs to deserialize the current key and value completely into memory, which for extremely
large values causes frequent OOM exceptions. This also works only with lists of relatively
small size (e.g. 1000 records).
> This patch is an implementation of a Writable that can handle arbitrarily long lists.
Initially it keeps an internal buffer (which can be (de)-serialized in the ordinary way),
and if the list size exceeds a certain threshold it is spilled to an external SequenceFile (hence
the name) on a configured FileSystem. The content of this Writable can be iterated, and the
data is pulled either from the internal buffer or from the external file in a transparent
way.
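The spill idea in the description can be sketched in plain Java (hypothetical class and
names; the actual patch spills to a Hadoop SequenceFile on a configured FileSystem, not a
local temp file): buffer entries in memory up to a threshold, append the overflow to an
external file, and iterate transparently over both.

```java
import java.io.*;
import java.nio.file.*;
import java.util.*;

public class SpillingList implements Iterable<String>, Closeable {
    private final int threshold;
    private final List<String> buffer = new ArrayList<>();
    private final Path spillFile;
    private BufferedWriter spillWriter;  // lazily created on first spill
    private int spilled = 0;

    public SpillingList(int threshold) throws IOException {
        this.threshold = threshold;
        this.spillFile = Files.createTempFile("spill", ".txt");
    }

    // Keep the value in memory while under the threshold; spill to the
    // external file once the in-memory buffer is full.
    public void add(String value) throws IOException {
        if (buffer.size() < threshold) {
            buffer.add(value);
        } else {
            if (spillWriter == null)
                spillWriter = Files.newBufferedWriter(spillFile);
            spillWriter.write(value);
            spillWriter.newLine();
            spilled++;
        }
    }

    public int size() { return buffer.size() + spilled; }

    // Iterate over the buffered entries, then the spilled ones. For brevity
    // this sketch reads the spill file eagerly; a real implementation would
    // stream from the file to keep memory bounded.
    @Override
    public Iterator<String> iterator() {
        try {
            if (spillWriter != null) spillWriter.flush();
            List<String> all = new ArrayList<>(buffer);
            if (spilled > 0) all.addAll(Files.readAllLines(spillFile));
            return all.iterator();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    @Override
    public void close() throws IOException {
        if (spillWriter != null) spillWriter.close();
        Files.deleteIfExists(spillFile);
    }
}
```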

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
