hadoop-mapreduce-issues mailing list archives

From "Binglin Chang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAPREDUCE-2841) Task level native optimization
Date Sun, 28 Aug 2011 17:56:38 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-2841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13092524#comment-13092524 ]

Binglin Chang commented on MAPREDUCE-2841:

bq. Can you summarize how the memory management works in the current patch?
KeyValue buffer memory management in the current patch is very simple; it has three parts:

MemoryPool
Holds the buffer of size io.sort.mb and tracks current buffer usage.
Notice that this buffer occupies only virtual memory, not RSS (memory actually used), as long as the memory is not touched. This is better than Java, because Java initializes its arrays.
Lazy memory allocation is a beautiful feature :)
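The lazy-allocation point above can be sketched as follows (an illustrative sketch, not the patch's actual code; the function names are hypothetical):

```cpp
#include <sys/mman.h>
#include <cstddef>

// Hypothetical sketch of reserving an io.sort.mb-sized buffer on Linux.
// Anonymous mmap hands back virtual address space only; physical pages
// (RSS) are committed lazily on first touch, unlike a Java byte[] which
// is zero-filled up front.
char* reserve_sort_buffer(std::size_t bytes) {
  void* p = mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  return p == MAP_FAILED ? nullptr : static_cast<char*>(p);
}

void release_sort_buffer(char* buf, std::size_t bytes) {
  munmap(buf, bytes);
}
```

Until a page of the returned buffer is written, it costs nothing in resident memory; touching one byte commits only that one page (~4 KB).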

MemoryBlock
A small chunk of memory backed by the MemoryPool, used by PartitionBucket.
The default size of a MemoryBlock is ceil(io.sort.mb / partition / 4 / MIN_BLOCK_SIZE) * MIN_BLOCK_SIZE; currently MIN_BLOCK_SIZE == 32K, but it should be tuned dynamically according to the partition number and io.sort.mb.
The purpose of MemoryBlock is to reduce CPU cache misses. When sorting a large number of indirectly addressed KV pairs, I guess the sort time will be dominated by random RAM reads, so MemoryBlock is used to give each bucket relatively contiguous memory.
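The sizing rule above works out to the following (a sketch under the stated defaults; the function name is illustrative, and the rule is read as "a quarter of each partition's fair share, rounded up to a multiple of MIN_BLOCK_SIZE"):

```cpp
#include <cstdint>

// Illustrative MemoryBlock sizing: divide io.sort.mb evenly so each
// partition can hold roughly 4 blocks, then round up to a multiple of
// MIN_BLOCK_SIZE (32K in the current patch).
constexpr uint64_t MIN_BLOCK_SIZE = 32 * 1024;

uint64_t block_size(uint64_t io_sort_mb_bytes, uint64_t partitions) {
  uint64_t raw = io_sort_mb_bytes / partitions / 4;
  // ceil(raw / MIN_BLOCK_SIZE) * MIN_BLOCK_SIZE
  return (raw + MIN_BLOCK_SIZE - 1) / MIN_BLOCK_SIZE * MIN_BLOCK_SIZE;
}
```

For example, with io.sort.mb = 200MB and 100 partitions this gives 512K blocks; with 1000 partitions it shrinks to 64K, which is how a large partition count squeezes the blocks.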

PartitionBucket
Stores the KV pairs of one partition. It has two arrays:
  vector<MemoryBlock *> blocks
    blocks used by this bucket
  vector<uint32_t> offsets
    start offset of each KV pair in the MemoryPool
    This vector is not under memory control (within io.sort.mb) yet; a bug that needs to be fixed
      (use memory of the MemoryPool, use a MemoryBlock directly, or grow backward from the buffer end)
    It uses less memory (1/3) than the Java kvindices, and uses at most 1/2 of the io.sort.mb memory (when all k/v are empty), so it won't be much of a problem currently.
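The two arrays above can be sketched like this (an illustrative C++ sketch, not the patch's code; here blocks own their storage directly instead of borrowing from a shared MemoryPool, and `put` is a hypothetical helper):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Simplified PartitionBucket: a list of fixed-size MemoryBlocks plus a
// parallel array of KV start offsets.
struct MemoryBlock {
  std::vector<char> data;
  std::size_t used = 0;
  explicit MemoryBlock(std::size_t cap) : data(cap) {}
  std::size_t remain() const { return data.size() - used; }
};

struct PartitionBucket {
  std::size_t block_size;
  std::vector<MemoryBlock> blocks;  // blocks used by this bucket
  std::vector<uint32_t> offsets;    // KV pair start offsets

  explicit PartitionBucket(std::size_t bs) : block_size(bs) {}

  // Append one serialized KV record; grabs a fresh block when the
  // current one cannot fit it, leaving a hole at its tail -- the
  // "memory holes" limitation mentioned below.
  void put(const char* kv, uint32_t len) {
    if (blocks.empty() || blocks.back().remain() < len) {
      blocks.emplace_back(block_size);
    }
    MemoryBlock& b = blocks.back();
    std::memcpy(b.data.data() + b.used, kv, len);
    offsets.push_back(
        static_cast<uint32_t>((blocks.size() - 1) * block_size + b.used));
    b.used += len;
  }
};
```

Sorting then permutes only the small `offsets` array while the records stay put, which is the indirect addressing the cache-miss discussion refers to.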

Limitations of this approach:
A large partition number leads to small MemoryBlocks.
Large keys/values can cause memory holes in small MemoryBlocks.

It's difficult to determine the block size, since it relates to the K/V size (like the old io.sort.record.percent). 200MB of memory can hold only 12800 16K MemoryBlocks, so if the average K/V size is a little bigger than 8K, half of the memory will likely be wasted.
This approach will not work well when both the partition number and the key/value size are large, i.e. when io.sort.mb / partition number is too small, but this is a rare case, and it can be improved; for example, we can use the MemoryPool directly (disabling MemoryBlock).
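The waste estimate above can be made concrete (a worked sketch with illustrative function names, mirroring the 200MB / 16K-block / ~8K-record numbers in the text):

```cpp
#include <cstdint>

// With fixed-size blocks, a record slightly larger than half a block
// leaves no room for a second record, so nearly half of each block sits
// idle.
uint64_t records_per_block(uint64_t block_bytes, uint64_t kv_bytes) {
  return block_bytes / kv_bytes;
}

double wasted_fraction(uint64_t block_bytes, uint64_t kv_bytes) {
  uint64_t used = records_per_block(block_bytes, kv_bytes) * kv_bytes;
  return 1.0 - static_cast<double>(used) / block_bytes;
}
```

With 16K blocks and 8.2K records, only one record fits per block and roughly 50% of the block is wasted; 200MB holds exactly 12800 such blocks.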

The other thing related to this is that the approach supports only simple synchronized collect/spill; I don't think this will hurt performance very much.
Asynchronous collect/spill needs tuning of io.sort.spill.percent, and if we can make sort & spill really fast, parallel collect & spill is not as important as before; we can also let the original mapper thread do the sort & spill by enabling parallel sort & spill.
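The synchronized collect/spill flow described above amounts to the following (a minimal sketch with stand-in types, not the patch's code: the mapper thread fills the buffer and, when it is full, sorts and spills inline before collecting more, instead of a background spill thread triggered by io.sort.spill.percent):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct SyncCollector {
  std::size_t capacity;
  std::vector<int> buffer;               // stand-in for serialized KV pairs
  std::vector<std::vector<int>> spills;  // each spill = one sorted run

  explicit SyncCollector(std::size_t cap) : capacity(cap) {}

  void collect(int rec) {
    if (buffer.size() == capacity) spill();  // blocks the mapper thread
    buffer.push_back(rec);
  }

  void spill() {  // sort & spill inline, then reuse the buffer
    std::sort(buffer.begin(), buffer.end());
    spills.push_back(buffer);
    buffer.clear();
  }
};
```

Because collect blocks while spill runs, there is no spill.percent threshold to tune; the trade-off only matters if sort & spill is slow relative to the mapper.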

> Task level native optimization
> ------------------------------
>                 Key: MAPREDUCE-2841
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2841
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: task
>         Environment: x86-64 Linux
>            Reporter: Binglin Chang
>            Assignee: Binglin Chang
>         Attachments: MAPREDUCE-2841.v1.patch, dualpivot-0.patch, dualpivotv20-0.patch
> I've recently been working on native optimization for MapTask based on JNI.
> The basic idea is to add a NativeMapOutputCollector to handle k/v pairs emitted by the mapper, so that sort, spill, and IFile serialization can all be done in native code. A preliminary test (on Xeon E5410, jdk6u24) showed promising results:
> 1. Sort is about 3x-10x as fast as Java (only binary string comparison is supported)
> 2. IFile serialization speed is about 3x that of Java, about 500MB/s; if hardware CRC32C is used, things can get much faster (1G/s).
> 3. Merge code is not complete yet, so the test uses enough io.sort.mb to prevent mid-spills
> This leads to a total speedup of 2x~3x for the whole MapTask if IdentityMapper (a mapper that does nothing) is used.
> There are limitations, of course: currently only Text and BytesWritable are supported, and I have not thought through many things yet, such as how to support map-side combine. I had some discussion with somebody familiar with Hive, and it seems these limitations won't be much of a problem for Hive to benefit from those optimizations, at least. Advice or discussion about improving compatibility is most welcome :)
> Currently NativeMapOutputCollector has a static method called canEnable(), which checks whether the key/value types, comparator type, and combiner are all compatible; MapTask can then choose to enable NativeMapOutputCollector.
> This is only a preliminary test; more work needs to be done. I expect better final results, and I believe similar optimizations can be applied to the reduce task and shuffle too.
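For reference, the hardware CRC32C mentioned in point 2 of the description is the Castagnoli CRC; here is a minimal bit-at-a-time software sketch of it (slow, but it computes the same checksum that the SSE4.2 crc32 instruction produces in hardware, one byte or word per cycle):

```cpp
#include <cstddef>
#include <cstdint>

// CRC32C: Castagnoli polynomial 0x1EDC6F41, reflected form 0x82F63B78.
// The SSE4.2 crc32 instruction implements this same function in hardware.
uint32_t crc32c(const unsigned char* data, std::size_t len) {
  uint32_t crc = 0xFFFFFFFFu;
  for (std::size_t i = 0; i < len; ++i) {
    crc ^= data[i];
    for (int k = 0; k < 8; ++k)
      crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
  }
  return crc ^ 0xFFFFFFFFu;
}
```

The standard check value for CRC-32C over the ASCII string "123456789" is 0xE3069283, which distinguishes it from the zlib CRC-32 (polynomial 0x04C11DB7).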

This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

