hadoop-hdfs-issues mailing list archives

From "Colin Patrick McCabe (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-6709) Implement off-heap data structures for NameNode and other HDFS memory optimization
Date Wed, 23 Jul 2014 19:29:39 GMT

    [ https://issues.apache.org/jira/browse/HDFS-6709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14072197#comment-14072197 ]

Colin Patrick McCabe commented on HDFS-6709:

If Unsafe is removed, then we'll work around it the same way we work around the lack of symlink
or hardlink support, missing error information from mkdir, etc.  As you can see in this patch,
we don't need Unsafe; we just use it because it's faster.  I would assume that if Unsafe is
removed, there will be work on improving DirectByteBuffer and JNI performance, or on putting
alternative APIs in place that allow Java to function effectively on the server.  Otherwise,
the future of the platform doesn't look good.  Even Haskell has an Unsafe package.
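The DirectByteBuffer alternative mentioned above is available on any stock JVM today. A minimal sketch (the class name is mine, for illustration) of allocating a region outside the Java heap, where the GC never scans and no per-record object is created:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class OffHeapAlloc {
    public static void main(String[] args) {
        // 1 MiB of native memory outside the Java heap; freed when the
        // buffer object itself is collected, not scanned by the GC.
        ByteBuffer buf = ByteBuffer.allocateDirect(1 << 20)
                                   .order(ByteOrder.nativeOrder());
        // Absolute put/get address raw offsets -- no heap object per record.
        buf.putLong(0, 0xCAFEBABEL);
        System.out.println(Long.toHexString(buf.getLong(0)));
    }
}
```

The trade-off driving the Unsafe preference in the patch is that DirectByteBuffer adds bounds checks and (historically) worse JIT-compiled access paths than raw Unsafe peeks and pokes.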

bq. How do you envision off-heaping triplets in conjunction with those collections? Linked
list entries cost 48 bytes on a 64-bit JVM; a hash table entry costs 52 bytes. I know your
goal is reduced GC while ours is reduced memory usage, so it would be unacceptable if an off-heap
implementation consumed even more memory - which, incidentally, would still require GC and might
cancel any off-heap benefit and/or cause a performance degradation.

With off-heap objects, the sizes can be whatever we want.  I think a basic linked list entry
would be 16 bytes (two 8-byte prev and next pointers), plus the size of the payload.  A hash
table entry has no real minimum size, since again, it's just a memory region that contains
whatever we want.  We will be able to do a lot better than the JVM because of a few things:
* the JVM must store runtime type information (RTTI) for each object, and we won't
* the 64-bit JVM usually aligns to 8 bytes, but we don't have to
* we don't have to implement a "lock bit," or any of that
* we can use value types, and current JVMs can't (although future ones will be able to)
* the JVM doesn't know that you will create 1 million instances of an object; it just creates
a generic object layout that must balance access speed and object size.  Since we know, we can
be more efficient.
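The 16-bytes-plus-payload layout above can be sketched with a plain direct ByteBuffer used as an arena (all names here are hypothetical, not from the HDFS-6709 patch). Each entry is prev and next stored as 8-byte offsets, followed by an 8-byte payload: no object header, no lock word, no RTTI, no 8-byte alignment padding beyond what we choose:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

/** Sketch of an off-heap doubly linked list: entries are raw 24-byte
 *  records (16 bytes of prev/next offsets + 8-byte payload) in one arena. */
public class OffHeapList {
    static final int PREV = 0, NEXT = 8, PAYLOAD = 16, ENTRY = 24;
    static final long NIL = -1;

    final ByteBuffer arena;
    int top = 0;        // bump-pointer allocator: next free offset
    long head = NIL;

    OffHeapList(int maxEntries) {
        arena = ByteBuffer.allocateDirect(maxEntries * ENTRY)
                          .order(ByteOrder.nativeOrder());
    }

    /** Push a value on the front of the list; returns the entry's offset. */
    long push(long value) {
        long e = top; top += ENTRY;
        arena.putLong((int) (e + PREV), NIL);
        arena.putLong((int) (e + NEXT), head);
        arena.putLong((int) (e + PAYLOAD), value);
        if (head != NIL) arena.putLong((int) (head + PREV), e);
        head = e;
        return e;
    }

    long payload(long entry) { return arena.getLong((int) (entry + PAYLOAD)); }
    long next(long entry)    { return arena.getLong((int) (entry + NEXT)); }

    public static void main(String[] args) {
        OffHeapList list = new OffHeapList(1024);
        list.push(10); list.push(20); list.push(30);
        StringBuilder sb = new StringBuilder();
        // Walk from the head: last pushed value comes out first.
        for (long e = list.head; e != NIL; e = list.next(e))
            sb.append(list.payload(e)).append(' ');
        System.out.println(sb.toString().trim());
    }
}
```

Entries are addressed by offset rather than by Java reference, which is exactly what makes them invisible to the garbage collector.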

> Implement off-heap data structures for NameNode and other HDFS memory optimization
> ----------------------------------------------------------------------------------
>                 Key: HDFS-6709
>                 URL: https://issues.apache.org/jira/browse/HDFS-6709
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>         Attachments: HDFS-6709.001.patch
> We should investigate implementing off-heap data structures for NameNode and other HDFS
> memory optimization.  These data structures could reduce latency by avoiding the long GC times
> that occur with large Java heaps.  We could also avoid per-object memory overheads and control
> memory layout a little bit better.  This also would allow us to use the JVM's "compressed
> oops" optimization even with really large namespaces, if we could get the Java heap below
> 32 GB for those cases.  This would provide another performance and memory efficiency boost.

This message was sent by Atlassian JIRA
