hadoop-common-dev mailing list archives

From "Chris Douglas (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-2919) Create fewer copies of buffer data during sort/spill
Date Mon, 10 Mar 2008 07:03:46 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-2919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Douglas updated HADOOP-2919:

    Attachment: 2919-2.patch

This patch makes some minor performance improvements, adds documentation, and correctly effects record compression in place.

The following should probably be implemented as separate JIRAs:
* QuickSort would benefit from the optimization whereby keys equal to the pivot are swapped
into place at the end of a pass.
* Instead of recreating the spill thread, a persistent thread should accept spill events.
This will permit one to set the spill threshold to less than 50% and avoid the overhead of
creating a thread (assumed to be slight relative to the cost of a spill, but worth eliminating).
* Recreating collectors is expensive. Pooling resources, particularly the collection buffers, between jobs (once JVM reuse is in place) should make a significant difference for jobs with short-running maps.
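As a sketch of the first suggestion: a Bentley-McIlroy style partition parks keys equal to the pivot at both ends of the range during the pass, then swaps them into the middle at the end of the pass, so the recursion skips them entirely. This is illustrative code, not Hadoop's QuickSort:

```java
public class ThreeWayQuickSort {
  public static void sort(int[] a) { qsort(a, 0, a.length - 1); }

  // Bentley-McIlroy partition: keys equal to the pivot collect at both ends
  // during the pass and are swapped into the middle at the end of the pass.
  private static void qsort(int[] a, int lo, int hi) {
    if (lo >= hi) return;
    final int v = a[lo];                       // pivot
    int i = lo, j = hi + 1;                    // scan pointers
    int p = lo, q = hi + 1;                    // boundaries of the equal runs
    while (true) {
      while (a[++i] < v) if (i == hi) break;
      while (v < a[--j]) if (j == lo) break;
      if (i == j && a[i] == v) swap(a, ++p, i);
      if (i >= j) break;
      swap(a, i, j);
      if (a[i] == v) swap(a, ++p, i);          // park equal key at the left end
      if (a[j] == v) swap(a, --q, j);          // park equal key at the right end
    }
    i = j + 1;
    for (int k = lo; k <= p; k++) swap(a, k, j--);   // equal keys into place
    for (int k = hi; k >= q; k--) swap(a, k, i++);
    qsort(a, lo, j);                           // recurse only on < and > ranges
    qsort(a, i, hi);
  }

  private static void swap(int[] a, int x, int y) {
    int t = a[x]; a[x] = a[y]; a[y] = t;
  }
}
```

The payoff grows with the number of duplicate keys: an all-equal range costs a single pass instead of a degenerate recursion.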

> Create fewer copies of buffer data during sort/spill
> ----------------------------------------------------
>                 Key: HADOOP-2919
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2919
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>            Reporter: Chris Douglas
>            Assignee: Chris Douglas
>             Fix For: 0.17.0
>         Attachments: 2919-0.patch, 2919-1.patch, 2919-2.patch
> Currently, the sort/spill works as follows:
> Let r be the number of partitions
> For each call to collect(K,V) from map:
> * If buffers do not exist, allocate a new DataOutputBuffer to collect K,V bytes, and allocate r buffers for collecting K,V offsets
> * Write K,V into buffer, noting offsets
> * Register offsets with the associated partition buffer, allocating/copying accounting buffers if necessary
> * Calculate the total memory usage for the buffer and all partition collectors by iterating over the collectors
> * If total memory usage is greater than half of io.sort.mb, then start a new thread to spill, blocking if another spill is in progress
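Taken together, the collect() path amounts to appending serialized K,V bytes to one growable buffer while recording per-partition offsets, then comparing total usage against the spill threshold. A minimal model of that bookkeeping (the class and field names here are illustrative, not Hadoop's):

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

public class CollectSketch {
  private final int softLimit;                    // roughly half of io.sort.mb
  private final ByteArrayOutputStream kvBytes = new ByteArrayOutputStream();
  private final List<List<int[]>> offsets;        // per partition: {keyStart, valStart, end}

  public CollectSketch(int numPartitions, int softLimit) {
    this.softLimit = softLimit;
    offsets = new ArrayList<>();
    for (int i = 0; i < numPartitions; i++) offsets.add(new ArrayList<>());
  }

  /** Append one serialized K,V pair; returns true once usage crosses the spill threshold. */
  public boolean collect(byte[] key, byte[] value, int partition) {
    int keyStart = kvBytes.size();
    kvBytes.write(key, 0, key.length);            // write K, noting its offset
    int valStart = kvBytes.size();
    kvBytes.write(value, 0, value.length);        // write V
    offsets.get(partition).add(new int[] { keyStart, valStart, kvBytes.size() });
    return kvBytes.size() > softLimit;            // caller would kick off the spill thread
  }

  public int bytesUsed() { return kvBytes.size(); }
  public int recordsIn(int partition) { return offsets.get(partition).size(); }
}
```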
> For each spill (assuming no combiner):
> * Save references to our K,V byte buffer and accounting data, setting the former to null (will be recreated on the next call to collect(K,V))
> * Open a SequenceFile.Writer for this partition
> * Sort each partition separately (the current sort reuses IntWritable objects, but still requires wrapping each index in one)
> * Build a RawKeyValueIterator of sorted data for the partition
> * Deserialize each key and value and call SequenceFile::append(K,V) on the writer for this partition
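The IntWritable wrapping noted above can be avoided by sorting a plain int[] of record indices against a comparator over the serialized keys, so no object is created per record. A minimal sketch of the idea (names are illustrative):

```java
public class IndexSortSketch {
  interface IndexComparator { int compare(int left, int right); }

  // Insertion sort over a primitive index array: the comparator looks at the
  // serialized keys the indices point to, and only ints are ever moved.
  static void sortIndices(int[] idx, IndexComparator cmp) {
    for (int i = 1; i < idx.length; i++) {
      int v = idx[i], j = i - 1;
      while (j >= 0 && cmp.compare(idx[j], v) > 0) { idx[j + 1] = idx[j]; j--; }
      idx[j + 1] = v;
    }
  }
}
```

In the real spill path the comparator would compare raw key bytes in the collection buffer at the recorded offsets, so sorting touches only the index array.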
> There are a number of opportunities for reducing the number of copies, creations, and operations we perform in this stage, particularly since growing many of the buffers involved requires that we copy the existing data to the newly sized allocation.
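The cost of that growth is easy to quantify: with capacity doubling, filling a buffer to N bytes re-copies just under N bytes along the way (1 + 2 + 4 + ... + N/2 < N), so pre-sizing buffers or reusing them across records removes a hidden linear copy. A small illustration of the arithmetic:

```java
public class GrowthCopyCost {
  /** Bytes re-copied while a doubling buffer grows from initialCapacity to at least target. */
  public static long bytesCopiedGrowing(long target, long initialCapacity) {
    long cap = initialCapacity, copied = 0;
    while (cap < target) {
      copied += cap;     // a resize copies the entire existing buffer
      cap *= 2;
    }
    return copied;       // sums to (final capacity - initial capacity)
  }
}
```

For example, growing from 1 KB to 1 MB re-copies 1023 KB of data in resizes alone, on top of the writes themselves.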

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
