hadoop-mapreduce-issues mailing list archives

From "Todd Lipcon (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAPREDUCE-2841) Task level native optimization
Date Thu, 28 Aug 2014 00:28:04 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-2841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14113114#comment-14113114 ]

Todd Lipcon commented on MAPREDUCE-2841:
----------------------------------------

I've uploaded a benchmark tool to github here:
https://github.com/toddlipcon/mr-collector-benchmark

As a quick summary of the results: the current output collector typically takes about 6.6
seconds of CPU time and 9 seconds of wall time to sort 250MB of terasort-style input data,
while the native implementation takes about 1.7 seconds of CPU time and 3-5 seconds of wall
time. So, as originally discussed at the top of this JIRA, there is a big CPU advantage to
native sorting: across a terasort we can expect to save many thousands of seconds of CPU time,
which should translate either to faster wall clock time for the job or to better concurrency
on the cluster.
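
To put a rough number on that (an illustrative back-of-envelope only, assuming a 1TB terasort
split into ~4,000 map tasks of roughly 250MB each):

    4,000 maps x (6.6s - 1.7s) CPU saved per map ≈ 19,600 CPU-seconds on the map side alone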

I'll write up a more thorough summary including results for Facebook's collector implementation,
and some explanation of _why_ it's faster. I'm also planning to deploy this on a cluster soon
and get some actual results on a terasort.
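
For anyone who wants to try the native collector against their own job, it should just be a
matter of pointing the pluggable map output collector property at the native delegator. Here's
a minimal sketch; the delegator class name is my assumption based on the MR-2841 branch layout,
so adjust it if the merged package differs:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class NativeCollectorJob {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Pluggable map output collector property (MAPREDUCE-4807). The value below is
        // assumed from the MR-2841 branch; swap in whatever the merged code ships with.
        conf.set("mapreduce.job.map.output.collector.class",
            "org.apache.hadoop.mapred.nativetask.NativeMapOutputCollectorDelegator");
        Job job = Job.getInstance(conf, "terasort-with-native-collector");
        // ... configure mapper, reducer, and input/output paths as usual ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }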

> Task level native optimization
> ------------------------------
>
>                 Key: MAPREDUCE-2841
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2841
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: task
>         Environment: x86-64 Linux/Unix
>            Reporter: Binglin Chang
>            Assignee: Sean Zhong
>         Attachments: DESIGN.html, MAPREDUCE-2841.v1.patch, MAPREDUCE-2841.v2.patch, dualpivot-0.patch,
dualpivotv20-0.patch, fb-shuffle.patch, hadoop-3.0-mapreduce-2841-2014-7-17.patch
>
>
> I've recently been working on a native optimization for MapTask based on JNI.
> The basic idea is to add a NativeMapOutputCollector to handle the k/v pairs emitted by the
> mapper, so that sort, spill, and IFile serialization can all be done in native code. A
> preliminary test (on a Xeon E5410, jdk6u24) showed promising results:
> 1. Sort is about 3x-10x as fast as the Java implementation (only binary string comparison is
> supported)
> 2. IFile serialization is about 3x as fast as the Java version, around 500MB/s; if hardware
> CRC32C is used, it can get much faster (1G/s)
> 3. The merge code is not complete yet, so the test uses a large enough io.sort.mb to prevent
> mid-spills
> This leads to a total speedup of 2x~3x for the whole MapTask if IdentityMapper (a mapper that
> does nothing) is used.
> There are limitations, of course: currently only Text and BytesWritable are supported, and I
> have not thought through many things yet, such as how to support map-side combine. I had some
> discussion with somebody familiar with Hive, and it seems these limitations won't be much of a
> problem for Hive to benefit from these optimizations, at least. Advice or discussion about
> improving compatibility is most welcome :)
> Currently NativeMapOutputCollector has a static method called canEnable(), which checks
> whether the key/value types, comparator type, and combiner are all compatible; MapTask can
> then choose to enable NativeMapOutputCollector.
> This is only a preliminary test, and more work needs to be done. I expect better final
> results, and I believe similar optimizations can be applied to the reduce task and shuffle
> too.
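
As a footnote to the description above: the canEnable() compatibility gate amounts to roughly
the following. This is an illustrative sketch only, not the patch code, and every name except
canEnable() and the standard Hadoop classes is hypothetical:

    import org.apache.hadoop.io.BytesWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;

    // Illustrative sketch of the compatibility check described in the issue text;
    // not the actual patch code.
    public final class NativeCollectorGate {
      public static boolean canEnable(JobConf job) {
        // Only Text and BytesWritable keys/values are handled natively so far.
        boolean typesOk = isSupported(job.getMapOutputKeyClass())
            && isSupported(job.getMapOutputValueClass());
        // Native sort only does binary (byte-wise) comparison, so a user-supplied
        // key comparator rules the native path out.
        boolean defaultComparator =
            job.get("mapreduce.job.output.key.comparator.class") == null;
        // Map-side combine is not supported yet.
        boolean noCombiner = job.getCombinerClass() == null;
        return typesOk && defaultComparator && noCombiner;
      }

      private static boolean isSupported(Class<?> c) {
        return c == Text.class || c == BytesWritable.class;
      }
    }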



--
This message was sent by Atlassian JIRA
(v6.2#6252)
