flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-1320) Add an off-heap variant of the managed memory
Date Wed, 14 Jan 2015 14:11:35 GMT

    [ https://issues.apache.org/jira/browse/FLINK-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14276923#comment-14276923 ]

ASF GitHub Bot commented on FLINK-1320:
---------------------------------------

Github user mxm commented on the pull request:

    https://github.com/apache/flink/pull/290#issuecomment-69920591
  
    @uce Yes, it will be copied from off-heap to heap first. There are also other places like
the `EventSerializer` where this is the case. I guess it depends on the amount of data that
is being copied. If you want to operate on a byte array, then you have to copy it into the
JVM heap.
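
    A minimal sketch of that copy in plain NIO (hypothetical class name, not Flink code): a bulk `get()` moves the bytes from the direct (off-heap) buffer into a heap `byte[]` in a single copy.

    ```java
    import java.nio.ByteBuffer;

    public class OffHeapToHeapCopy {
        public static void main(String[] args) {
            // allocated outside the garbage-collected heap
            ByteBuffer offHeap = ByteBuffer.allocateDirect(1024);
            offHeap.putInt(42);
            offHeap.flip();

            // to operate on a byte array, the data has to be copied
            // into the JVM heap first
            byte[] onHeap = new byte[offHeap.remaining()];
            offHeap.get(onHeap);
            System.out.println(onHeap.length + " bytes copied to the heap");
        }
    }
    ```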
    
    @uce I ran the integration tests manually. We could add the random testing in a separate
pull request.


> Add an off-heap variant of the managed memory
> ---------------------------------------------
>
>                 Key: FLINK-1320
>                 URL: https://issues.apache.org/jira/browse/FLINK-1320
>             Project: Flink
>          Issue Type: Improvement
>          Components: Local Runtime
>            Reporter: Stephan Ewen
>            Priority: Minor
>
> For (nearly) all memory that Flink accumulates (in the form of sort buffers, hash tables,
> caching), we use a special way of representing data serialized across a set of memory pages.
> The bulk of the work lies in implementing the algorithms to operate on pages rather than on
> objects.
> The core class for the memory is the {{MemorySegment}}, which has all the methods to set
> and get primitive values efficiently. It is a somewhat simpler (and faster) variant of a
> HeapByteBuffer.
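> A minimal, hypothetical sketch of such a heap-backed segment (plain array arithmetic for clarity; the real class is more involved and uses faster access paths):
> {code:java}
> // illustrative only, not the actual MemorySegment
> public class HeapSegmentSketch {
>     private final byte[] memory;
>
>     public HeapSegmentSketch(int size) {
>         this.memory = new byte[size];
>     }
>
>     public byte get(int index) {
>         return memory[index];
>     }
>
>     public void put(int index, byte value) {
>         memory[index] = value;
>     }
>
>     // big-endian int access, as an example of a typed accessor
>     public int getInt(int index) {
>         return ((memory[index]     & 0xff) << 24)
>              | ((memory[index + 1] & 0xff) << 16)
>              | ((memory[index + 2] & 0xff) <<  8)
>              |  (memory[index + 3] & 0xff);
>     }
>
>     public void putInt(int index, int value) {
>         memory[index]     = (byte) (value >>> 24);
>         memory[index + 1] = (byte) (value >>> 16);
>         memory[index + 2] = (byte) (value >>>  8);
>         memory[index + 3] = (byte)  value;
>     }
> }
> {code}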
> As such, it should be straightforward to create a version where the memory segment is
> not backed by a heap byte[], but by memory allocated outside the JVM, similar to what the
> NIO DirectByteBuffers or the Netty direct buffers do.
> This may have multiple advantages:
>   - We reduce the size of the (garbage-collected) JVM heap and the number and size of
> long-lived objects. For large JVM sizes, this may improve performance quite a bit.
> Ultimately, we could in many cases reduce the JVM heap to between 1/3 and 1/2 of its
> current size and keep the remaining memory outside the JVM.
>   - We save copies when we move memory pages to disk (spilling) or through the network
> (shuffling / broadcasting / forward piping).
> The changes required to implement this are:
>   - Add an {{UnmanagedMemorySegment}} that only stores the memory address as a long, and
> the segment size. It is initialized from a DirectByteBuffer (see the sketch after this list).
>   - Allow the MemoryManager to allocate these MemorySegments, instead of the current
> ones.
>   - Make sure that the startup scripts pick up the mode and configure the heap size and
> the max direct memory properly.
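> A hypothetical skeleton of that {{UnmanagedMemorySegment}} (names and details are illustrative, not the actual implementation): it keeps only the raw address and the size obtained from a DirectByteBuffer, and goes through {{sun.misc.Unsafe}} for the accesses.
> {code:java}
> import java.lang.reflect.Field;
> import java.nio.ByteBuffer;
> import sun.misc.Unsafe;
>
> public class UnmanagedSegmentSketch {
>     private static final Unsafe UNSAFE = getUnsafe();
>
>     private final ByteBuffer buffer; // strong reference keeps the memory alive
>     private final long address;      // raw off-heap address
>     private final int size;
>
>     public UnmanagedSegmentSketch(ByteBuffer directBuffer) {
>         if (!directBuffer.isDirect()) {
>             throw new IllegalArgumentException("buffer must be direct");
>         }
>         this.buffer = directBuffer;
>         this.address = ((sun.nio.ch.DirectBuffer) directBuffer).address();
>         this.size = directBuffer.capacity();
>     }
>
>     public byte get(int index) {
>         checkIndex(index, 1);
>         return UNSAFE.getByte(address + index);
>     }
>
>     public void putLong(int index, long value) {
>         checkIndex(index, 8);
>         UNSAFE.putLong(address + index, value);
>     }
>
>     private void checkIndex(int index, int bytes) {
>         if (index < 0 || index > size - bytes) {
>             throw new IndexOutOfBoundsException();
>         }
>     }
>
>     private static Unsafe getUnsafe() {
>         try {
>             Field f = Unsafe.class.getDeclaredField("theUnsafe");
>             f.setAccessible(true);
>             return (Unsafe) f.get(null);
>         } catch (Exception e) {
>             throw new RuntimeException("cannot access sun.misc.Unsafe", e);
>         }
>     }
> }
> {code}
> For the startup scripts, the relevant JVM settings would presumably be {{-Xmx}} for the heap portion and {{-XX:MaxDirectMemorySize}} for the off-heap portion.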
> Since the MemorySegment is probably the most performance-critical class in Flink, we
> must take care that we do this right. The following are critical considerations:
>   - If we want both solutions (heap and off-heap) to exist side-by-side (configurable),
> we must make the base MemorySegment abstract and implement two versions (heap and off-heap),
> as in the sketch after this list.
>   - To get the best performance, we need to make sure that only one class gets loaded
> (or at least is ever used), to ensure optimal JIT de-virtualization and inlining.
>   - We should carefully measure the performance of both variants. From previous micro
> benchmarks, I remember that individual byte accesses in DirectByteBuffers (off-heap) were
> slightly slower than on-heap accesses, while larger accesses were equally fast or slightly
> faster.
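> A sketch of the side-by-side shape (hypothetical names): an abstract base with exactly two subclasses. If only one subclass is ever loaded or used, call sites through the base type stay monomorphic, so the JIT can de-virtualize and inline them.
> {code:java}
> // illustrative only, not the actual class hierarchy
> public abstract class AbstractSegmentSketch {
>     protected final int size;
>
>     protected AbstractSegmentSketch(int size) {
>         this.size = size;
>     }
>
>     public abstract byte get(int index);
>
>     public abstract void put(int index, byte value);
> }
>
> // heap variant: backed by a byte[]
> class HeapVariantSketch extends AbstractSegmentSketch {
>     private final byte[] memory;
>
>     HeapVariantSketch(int size) {
>         super(size);
>         this.memory = new byte[size];
>     }
>
>     @Override
>     public byte get(int index) { return memory[index]; }
>
>     @Override
>     public void put(int index, byte value) { memory[index] = value; }
> }
>
> // the off-heap variant would mirror this, reading and writing through
> // the raw address as in the UnmanagedSegmentSketch above
> {code}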



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
