hadoop-mapreduce-issues mailing list archives

From "Chris Douglas (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAPREDUCE-2841) Task level native optimization
Date Tue, 30 Aug 2011 22:13:13 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-2841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13094154#comment-13094154

Chris Douglas commented on MAPREDUCE-2841:

bq. We are trying to evaluate and compare the C++ impl in HCE (and also this jira) against a pure Java re-impl. What we mostly care about is whether there is something the C++ impl can do that a Java re-impl cannot, and if there is, how large that difference is. From there we can have a better understanding of each approach and decide which way to go.

Sorry, that's what I was trying to answer. A system matching your description existed in 0.16,
and tests of the current collector show it to be faster for non-degenerate cases and far more
predictable. The bucketed model inherently has some internal fragmentation, which can only
be eliminated by expensive buffer copies and compactions or by per-record byte arrays, where
the 8-byte object overhead exceeds the cost of tracking the partition, which requires only
4 bytes. Eliminating that overhead is impractical, but even mitigating it (e.g. by allowing
partitions to share slabs) requires implementing an allocation and memory management system
across Java byte arrays or ByteBuffers, themselves allocated by the JVM. I would expect
that system to be easier to write and maintain than even the current impl, but not trivial
if it supports all of the existing use cases and semantics. Unlike in the C++ impl (and as in
the current one), abstractions would likely be sacrificed to avoid the overheads.
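To make the overhead argument concrete, here is a minimal sketch (hypothetical names, not Hadoop's actual collector code) contrasting per-record byte arrays, where every record pays a JVM object header, with a shared slab where the partition costs exactly one 4-byte int slot in a parallel metadata array:

```java
// (a) Per-record byte arrays: each record is its own object, so every
//     record pays the JVM object header (~8-16 bytes) on top of its data.
final class PerRecord {
    final byte[] data;      // one heap object per record
    final int partition;
    PerRecord(byte[] data, int partition) { this.data = data; this.partition = partition; }
}

// (b) Shared slab: all records share one byte[]; the partition is tracked
//     in a parallel int[] at a cost of one 4-byte slot per record.
final class SlabBuffer {
    private final byte[] slab;
    private final int[] meta;   // 3 ints per record: partition, offset, length
    private int used, records;

    SlabBuffer(int dataBytes, int maxRecords) {
        slab = new byte[dataBytes];
        meta = new int[3 * maxRecords];
    }

    /** Copies a serialized k/v pair into the slab; false means full (caller would spill). */
    boolean append(int partition, byte[] kv, int off, int len) {
        if (used + len > slab.length || 3 * records + 3 > meta.length) return false;
        System.arraycopy(kv, off, slab, used, len);
        meta[3 * records]     = partition;
        meta[3 * records + 1] = used;
        meta[3 * records + 2] = len;
        used += len;
        records++;
        return true;
    }

    int records()            { return records; }
    int partitionOf(int i)   { return meta[3 * i]; }
}
```

Variant (b) is roughly what "sharing slabs across partitions" implies: the savings come with the obligation to manage allocation, compaction, and spill boundaries yourself, which is the maintenance cost described above.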

> Task level native optimization
> ------------------------------
>                 Key: MAPREDUCE-2841
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2841
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: task
>         Environment: x86-64 Linux
>            Reporter: Binglin Chang
>            Assignee: Binglin Chang
>         Attachments: MAPREDUCE-2841.v1.patch, MAPREDUCE-2841.v2.patch, dualpivot-0.patch,
> I've recently been working on native optimization for MapTask based on JNI.
> The basic idea is to add a NativeMapOutputCollector to handle the k/v pairs emitted by the mapper, so that sort, spill, and IFile serialization can all be done in native code. Preliminary tests (on a Xeon E5410, jdk6u24) showed promising results:
> 1. Sort is about 3x-10x as fast as Java (only binary string comparison is supported)
> 2. IFile serialization is about 3x the speed of Java, about 500MB/s; if hardware CRC32C is used, things get much faster (1G/s)
> 3. Merge code is not complete yet, so the tests use a large enough io.sort.mb to prevent mid-spills
> This leads to a total speedup of 2x~3x for the whole MapTask when an IdentityMapper (a mapper that does nothing) is used.
> There are limitations, of course: currently only Text and BytesWritable are supported, and I haven't thought through many things yet, such as how to support map-side combine. I've had some discussion with people familiar with Hive, and it seems these limitations won't be much of a problem for Hive, at least, to benefit from these optimizations. Advice or discussion about improving compatibility is most welcome :)
> Currently NativeMapOutputCollector has a static method called canEnable(), which checks whether the key/value types, comparator type, and combiner are all compatible; if so, MapTask can choose to enable NativeMapOutputCollector.
> This is only a preliminary test, and more work needs to be done. I expect better final results, and I believe similar optimizations can be applied to the reduce task and shuffle too.
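The `canEnable()` gate described in the issue body might look like the following sketch. The specific checks and names here are illustrative assumptions, not the patch's actual code; the real patch would check runtime classes and job configuration rather than strings:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch: a static gate the task consults before swapping in the native
// collector. Checks are hypothetical, modeled on the limitations stated
// in the issue (Text/BytesWritable only, default comparator, no combiner).
public class NativeMapOutputCollector {
    private static final Set<String> SUPPORTED = new HashSet<>(Arrays.asList(
        "org.apache.hadoop.io.Text",
        "org.apache.hadoop.io.BytesWritable"));

    public static boolean canEnable(String keyClass, String valueClass,
                                    String comparatorClass, boolean hasCombiner) {
        return SUPPORTED.contains(keyClass)
            && SUPPORTED.contains(valueClass)
            && comparatorClass == null   // only the default byte comparator
            && !hasCombiner;             // map-side combine not yet supported
    }

    // k/v bytes would cross the JNI boundary here; sort, spill, and IFile
    // writing then happen entirely in native code.
    // private native void collect(byte[] key, byte[] value, int partition);
}
```

When `canEnable()` returns false, MapTask would simply fall back to the existing Java collector, so unsupported jobs keep working unchanged.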

This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

