hadoop-mapreduce-issues mailing list archives

From "Chris Douglas (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAPREDUCE-2841) Task level native optimization
Date Mon, 29 Aug 2011 19:23:38 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-2841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13093121#comment-13093121 ]

Chris Douglas commented on MAPREDUCE-2841:
------------------------------------------

{quote}I agree. How should we contribute this to Hadoop? Add a new subdirectory in contrib like streaming,
merge it into native, or keep it in the current c++/libnativetask?
It contains both C++ and Java code, and will likely add client tools like streaming and a
dev SDK.{quote}

To pair the java/c++ code, a contrib module could make sense. Client tools and dev libraries
are distant goals, though.

Contributing it to the 0.20 branch is admissible, but suboptimal. Most of the releases generated
for that series are sustaining releases. While it's possible to propose a new release branch
with these improvements, releasing it would be difficult. Targeting trunk would be the best
approach, if you can port your code.

{quote}We are also evaluating the approach of optimizing the existing Hadoop Java map-side
sort algorithm (e.g. playing the same set of tricks used in this C++ impl: bucket sort, prefix
key comparison, a better crc32, etc).

The main question we are interested in is how big the memory problem is for the Java impl.{quote}

Memory _is_ the problem. The bucketed sort used from 0.10(?) to 0.16 had more internal fragmentation
and a less predictable memory footprint (particularly for jobs with lots of reducers). Subsequent
implementations focused on reducing the number of spills for each task, because the cost of
spilling dominated the cost of the sort. Even with a significant speedup in the sort step,
avoiding a merge by managing memory more carefully usually effects faster task times. Merging
from fewer files also decreases the chance of failure and reduces seeks across all drives
(by spreading output over fewer disks). A precise memory footprint also helped application
authors calculate the framework overhead (both memory and number of spills) from the map output
size without considering the number of reducers.
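
As a rough illustration of that last point, the spill count can be estimated directly from the map output size and the collector settings. The sketch below uses the 0.20-era io.sort.mb and io.sort.spill.percent knobs but is otherwise hypothetical; it ignores per-record metadata overhead and is not framework code:

{code:java}
// Back-of-the-envelope spill estimate for a map task, assuming the
// collector spills whenever the serialization buffer crosses the spill
// threshold. Illustrative only; ignores io.sort.record.percent metadata.
public final class SpillEstimate {
  public static long estimateSpills(long mapOutputBytes,
                                    int ioSortMb,         // io.sort.mb
                                    float spillPercent) { // io.sort.spill.percent
    long bufferBytes = (long) ioSortMb * 1024 * 1024;
    long spillTrigger = (long) (bufferBytes * spillPercent);
    if (mapOutputBytes <= bufferBytes) {
      return 1; // everything fits: one final spill, no merge
    }
    // Each intermediate spill drains roughly one trigger's worth of data.
    return (mapOutputBytes + spillTrigger - 1) / spillTrigger;
  }

  public static void main(String[] args) {
    // e.g. 1 GB of map output with io.sort.mb=100 and a 0.8 spill threshold
    System.out.println(estimateSpills(1L << 30, 100, 0.8f)); // ~13 spills
  }
}
{code}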

That said, jobs matching particular profiles admit far more aggressive optimization, particularly
if some of the use cases are ignored. Records larger than the sort buffer, user-defined comparators
(particularly on deserialized objects), the combiner, and the intermediate data format restrict
the solution space and complicate implementations. There's certainly fat to be trimmed from
the general implementation, but restricting the problem will admit far more streamlined solutions
than identifying and branching on all the special cases.
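
The canEnable() gate described in the issue text below is one concrete way to do that restricting: check the job's requirements up front and fall back to the Java collector whenever an unsupported case is present. A minimal Java sketch of such a check, assuming the old-API JobConf, with config keys and class names that are illustrative rather than taken from the patch:

{code:java}
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;

// Sketch of a canEnable()-style compatibility gate: the native collector is
// only used when the job stays inside the cases it supports; otherwise the
// MapTask keeps the existing Java path. The exact checks are assumptions.
public final class NativeCollectorGate {
  public static boolean canEnable(JobConf job) {
    Class<?> keyClass = job.getMapOutputKeyClass();
    Class<?> valueClass = job.getMapOutputValueClass();
    boolean keyOk = keyClass == Text.class || keyClass == BytesWritable.class;
    boolean valueOk = valueClass == Text.class || valueClass == BytesWritable.class;
    boolean noCombiner = job.getCombinerClass() == null;        // combiner unsupported
    boolean defaultComparator =
        job.get("mapred.output.key.comparator.class") == null;  // no custom comparator
    return keyOk && valueOk && noCombiner && defaultComparator;
  }
}
{code}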

> Task level native optimization
> ------------------------------
>
>                 Key: MAPREDUCE-2841
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2841
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: task
>         Environment: x86-64 Linux
>            Reporter: Binglin Chang
>            Assignee: Binglin Chang
>         Attachments: MAPREDUCE-2841.v1.patch, dualpivot-0.patch, dualpivotv20-0.patch
>
>
> I'm recently working on native optimization for MapTask based on JNI.
> The basic idea is to add a NativeMapOutputCollector to handle k/v pairs emitted by the
> mapper, so that sort, spill, and IFile serialization can all be done in native code. A preliminary
> test (on Xeon E5410, jdk6u24) showed promising results:
> 1. Sort is about 3x-10x as fast as Java (only binary string comparison is supported)
> 2. IFile serialization speed is about 3x that of Java, about 500MB/s; if hardware CRC32C is
> used, things can get much faster (1G/s).
> 3. Merge code is not complete yet, so the test uses enough io.sort.mb to prevent a mid-spill
> This leads to a total speedup of 2x~3x for the whole MapTask if IdentityMapper (a mapper
> that does nothing) is used.
> There are limitations, of course: currently only Text and BytesWritable are supported,
> and I have not thought through many things yet, such as how to support map-side combine.
> I had some discussion with somebody familiar with Hive, and it seems these limitations won't
> be much of a problem for Hive to benefit from these optimizations, at least. Advice or discussion
> about improving compatibility is most welcome :)
> Currently NativeMapOutputCollector has a static method called canEnable(), which checks
> whether the key/value types, comparator type, and combiner are all compatible; MapTask can
> then choose to enable NativeMapOutputCollector.
> This is only a preliminary test; more work needs to be done. I expect better final results,
> and I believe similar optimizations can be applied to the reduce task and shuffle too.
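
For readers unfamiliar with JNI, a hypothetical sketch of the kind of boundary the description implies: the Java side hands serialized key/value bytes to native code, which owns the sort buffer, spilling, and IFile writing. Class, method, and library names below are illustrative, not from the attached patch:

{code:java}
// Hypothetical JNI surface for a native map output collector. The Java side
// only marshals bytes; sort, spill, and IFile serialization happen in C++.
public class NativeMapOutputCollectorStub {
  static {
    System.loadLibrary("nativetask"); // e.g. built from c++/libnativetask (name assumed)
  }

  // Hand one serialized k/v pair to the native buffer; may trigger a spill.
  private native void collect(byte[] key, int keyLen,
                              byte[] value, int valueLen, int partition);

  // Flush buffered records and finish the final spill/IFile output.
  private native void close();
}
{code}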

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
