hadoop-hdfs-issues mailing list archives

From "Kai Zheng (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-6709) Implement off-heap data structures for NameNode and other HDFS memory optimization
Date Fri, 25 Jul 2014 03:57:39 GMT

    [ https://issues.apache.org/jira/browse/HDFS-6709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14074027#comment-14074027 ]

Kai Zheng commented on HDFS-6709:
---------------------------------

bq.  Sadly, while investigating off heap performance last fall, I found this article that
claims off-heap reads via a DirectByteBuffer have horrible performance
I just took a look at the post. It does claim that DirectByteBuffer matches Unsafe's great write
performance but that its read performance is horrible; why that should be isn't clear yet. Looking
at the following code from the JRE, there seems to be no big difference between read and write in
DirectByteBuffer:
{code}
public byte get() {
    return ((unsafe.getByte(ix(nextGetIndex()))));
}
{code}
{code}
public ByteBuffer put(byte x) {
    unsafe.putByte(ix(nextPutIndex()), ((x)));
    return this;
}
{code}
Questions here: 1) If that's true, why do reads perform so much worse than writes? 2) Is it really
the case that simply adding the index check causes a big performance loss?
Some tests would be needed to make sure DirectByteBuffer is fast enough for the needs here, and
the comparison should be apples to apples, covering exactly the cases in question.
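For a first pass, something as simple as the hand-rolled sketch below could give apples-to-apples numbers (not JMH, so treat the results as indicative only; the class name, buffer size and round count are arbitrary choices here). It times the bounds-checked DirectByteBuffer path against raw Unsafe access, for both reads and writes:
{code}
import java.lang.reflect.Field;
import java.nio.ByteBuffer;
import sun.misc.Unsafe;

public class DirectBufferReadWriteBench {
  private static final int SIZE = 64 * 1024 * 1024;   // arbitrary 64 MB region

  public static void main(String[] args) throws Exception {
    // Grab the Unsafe instance reflectively (the usual trick outside the JDK).
    Field f = Unsafe.class.getDeclaredField("theUnsafe");
    f.setAccessible(true);
    Unsafe unsafe = (Unsafe) f.get(null);

    ByteBuffer direct = ByteBuffer.allocateDirect(SIZE);
    long addr = unsafe.allocateMemory(SIZE);

    // Run several rounds so the JIT has a chance to compile the loops.
    for (int round = 0; round < 5; round++) {
      long sum = 0;
      long t0 = System.nanoTime();
      for (int i = 0; i < SIZE; i++) {
        sum += direct.get(i);                 // bounds-checked read via DirectByteBuffer
      }
      long t1 = System.nanoTime();
      for (int i = 0; i < SIZE; i++) {
        direct.put(i, (byte) i);              // bounds-checked write via DirectByteBuffer
      }
      long t2 = System.nanoTime();
      for (int i = 0; i < SIZE; i++) {
        sum += unsafe.getByte(addr + i);      // raw read, no index check
      }
      long t3 = System.nanoTime();
      for (int i = 0; i < SIZE; i++) {
        unsafe.putByte(addr + i, (byte) i);   // raw write, no index check
      }
      long t4 = System.nanoTime();
      // Print sum so the read loops cannot be eliminated as dead code.
      System.out.printf("round %d: DBB get %d ms, DBB put %d ms, Unsafe get %d ms, Unsafe put %d ms (sum=%d)%n",
          round, (t1 - t0) / 1000000, (t2 - t1) / 1000000,
          (t3 - t2) / 1000000, (t4 - t3) / 1000000, sum);
    }
    unsafe.freeMemory(addr);
  }
}
{code}
A real test for this JIRA would of course exercise the actual NameNode data structures, but this at least separates the cost of the index check from the cost of the raw memory access.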

> Implement off-heap data structures for NameNode and other HDFS memory optimization
> ----------------------------------------------------------------------------------
>
>                 Key: HDFS-6709
>                 URL: https://issues.apache.org/jira/browse/HDFS-6709
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>         Attachments: HDFS-6709.001.patch
>
>
> We should investigate implementing off-heap data structures for NameNode and other HDFS
> memory optimization.  These data structures could reduce latency by avoiding the long GC
> times that occur with large Java heaps.  We could also avoid per-object memory overheads
> and control memory layout a little bit better.  This also would allow us to use the JVM's
> "compressed oops" optimization even with really large namespaces, if we could get the Java
> heap below 32 GB for those cases.  This would provide another performance and memory
> efficiency boost.
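As a purely hypothetical illustration of the idea in the description above, the sketch below keeps fixed-size records in a single direct buffer instead of one Java object per record, so they add neither GC pressure nor per-object header overhead; the record layout and class name are invented here and are not taken from HDFS-6709.001.patch:
{code}
import java.nio.ByteBuffer;

// Hypothetical sketch, not from the attached patch: a flat off-heap table of
// fixed-size records (blockId, numBytes, genStamp), addressed by slot index.
public class OffHeapRecordTable {
  private static final int RECORD_SIZE = 24;   // three longs per record
  private final ByteBuffer buf;                // direct buffer lives outside the Java heap

  public OffHeapRecordTable(int capacity) {
    this.buf = ByteBuffer.allocateDirect(capacity * RECORD_SIZE);
  }

  public void put(int slot, long blockId, long numBytes, long genStamp) {
    int off = slot * RECORD_SIZE;
    buf.putLong(off, blockId);
    buf.putLong(off + 8, numBytes);
    buf.putLong(off + 16, genStamp);
  }

  public long getBlockId(int slot) {
    return buf.getLong(slot * RECORD_SIZE);
  }
}
{code}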



--
This message was sent by Atlassian JIRA
(v6.2#6252)
