hadoop-mapreduce-issues mailing list archives

From "Joydeep Sen Sarma (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAPREDUCE-2841) Task level native optimization
Date Fri, 29 Aug 2014 04:28:12 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-2841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14114834#comment-14114834 ]

Joydeep Sen Sarma commented on MAPREDUCE-2841:

Chiming in - I tried Todd's benchmark on the FB blockoutputbuffer. From an internal email:

"I ran a benchmark that Todd Lipcon had posted which sorts 2.5M records of 100 bytes each
(10 byte key, 90 byte value), distributed evenly across 100 partitions. Took the average of
3 runs after one warmup run (all in the same JVM).
- Old Collector: 20.3s
- New Collector: 7.48s"

Very interested in this work - we are going to enable FB's output collector by default in Qubole.
I have done some tests on TPC-H queries. It doesn't make a difference in all queries, but
sometimes it helps significantly. Sample query timings with BMOB:

        Regular   BMOB
q05     544       484
q01     -- no change -- (94)
q02     175       166
q03     -- no change -- (too much variance, but approx 256)

One thing - I think query latency is absolutely the wrong benchmark for measuring the utility
of these optimizations. The problem is that Hive runtime (for example) is dominated by startup
and launch overheads for these types of queries. In a CPU/throughput-bound cluster, the
improvements would matter much more than straight-line query latency improvements indicate.
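The gap between the two measurements can be made concrete from the numbers above. A quick sketch (the units of the query timings are assumed to be seconds; the source does not say):

```java
public class SpeedupMath {
    public static void main(String[] args) {
        // Collector microbenchmark from the quoted email: 20.3s -> 7.48s.
        double collectorSpeedup = 20.3 / 7.48;
        // End-to-end Hive query q05 from the table: 544 -> 484 (units assumed).
        double queryImprovement = (544.0 - 484.0) / 544.0;
        System.out.printf("collector speedup: %.2fx%n", collectorSpeedup);   // 2.71x
        System.out.printf("q05 latency reduction: %.1f%%%n", queryImprovement * 100); // 11.0%
    }
}
```

A ~2.7x speedup in the collector shows up as only ~11% end-to-end, which is exactly the startup/launch-overhead dilution described above.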

> Task level native optimization
> ------------------------------
>                 Key: MAPREDUCE-2841
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2841
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: task
>         Environment: x86-64 Linux/Unix
>            Reporter: Binglin Chang
>            Assignee: Sean Zhong
>         Attachments: DESIGN.html, MAPREDUCE-2841.v1.patch, MAPREDUCE-2841.v2.patch, dualpivot-0.patch,
dualpivotv20-0.patch, fb-shuffle.patch, hadoop-3.0-mapreduce-2841-2014-7-17.patch
> I've recently been working on native optimization for MapTask based on JNI.
> The basic idea is to add a NativeMapOutputCollector to handle k/v pairs emitted by the
> mapper, so that sort, spill, and IFile serialization can all be done in native code. A preliminary
> test (on Xeon E5410, jdk6u24) showed promising results:
> 1. Sort is about 3x-10x as fast as Java (only binary string compare is supported)
> 2. IFile serialization speed is about 3x that of Java, about 500MB/s; if hardware CRC32C is
> used, things can get much faster (1G/s)
> 3. Merge code is not completed yet, so the test uses a large enough io.sort.mb to prevent mid-spills
> This leads to a total speedup of 2x-3x for the whole MapTask if IdentityMapper (a mapper
> that does nothing) is used
> There are limitations of course: currently only Text and BytesWritable are supported,
> and I have not thought through many things yet, such as how to support map-side combine.
> I had some discussion with somebody familiar with Hive, and it seems these limitations won't
> be much of a problem for Hive to benefit from these optimizations, at least. Advice or discussion
> about improving compatibility is most welcome :)
> Currently NativeMapOutputCollector has a static method called canEnable(), which checks
> whether the key/value types, comparator type, and combiner are all compatible; MapTask can
> then choose to enable NativeMapOutputCollector.
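A minimal sketch of what such a compatibility gate might look like. Only the canEnable() name and the checked conditions (key/value types, comparator, combiner) come from the description above; the class layout, parameters, and everything else here are assumptions, not the actual patch:

```java
import java.util.Set;

// Hypothetical sketch of the canEnable() gate described in the issue; the real
// check lives inside the native collector code and is configuration-driven.
public class NativeCollectorGate {
    // The issue states only Text and BytesWritable are supported so far.
    private static final Set<String> SUPPORTED_TYPES = Set.of(
        "org.apache.hadoop.io.Text",
        "org.apache.hadoop.io.BytesWritable");

    /** Returns true only when every feature the native path lacks is unused. */
    public static boolean canEnable(String keyClass, String valueClass,
                                    boolean customComparator, boolean hasCombiner) {
        if (!SUPPORTED_TYPES.contains(keyClass)) return false;   // key type
        if (!SUPPORTED_TYPES.contains(valueClass)) return false; // value type
        if (customComparator) return false; // only binary (byte-order) compare
        if (hasCombiner) return false;      // map-side combine not yet handled
        return true;
    }

    public static void main(String[] args) {
        System.out.println(canEnable("org.apache.hadoop.io.Text",
                                     "org.apache.hadoop.io.BytesWritable",
                                     false, false)); // true: fully compatible
        System.out.println(canEnable("org.apache.hadoop.io.Text",
                                     "org.apache.hadoop.io.Text",
                                     false, true));  // false: combiner present
    }
}
```

The point of a static gate like this is that MapTask can fall back to the Java collector transparently whenever the job uses a feature the native path does not cover.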
> This is only a preliminary test; more work needs to be done. I expect better final results,
> and I believe similar optimizations can be adopted for the reduce task and shuffle too.
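The hardware CRC32C mentioned in point 2 can be illustrated with the standard JDK class java.util.zip.CRC32C (added in JDK 9, after this issue was filed - the native patch itself predates it), which uses the x86-64 SSE4.2 crc32 instruction where available:

```java
import java.util.zip.CRC32C;

public class CrcDemo {
    public static void main(String[] args) {
        // CRC-32C (Castagnoli polynomial) is hardware-accelerated on x86-64
        // via SSE4.2, which is the speedup the issue description refers to.
        CRC32C crc = new CRC32C();
        crc.update("123456789".getBytes());
        // E3069283 is the standard CRC-32C check value for "123456789".
        System.out.printf("%08X%n", crc.getValue());
    }
}
```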

This message was sent by Atlassian JIRA
