hadoop-hdfs-issues mailing list archives

From "Steve Loughran (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-559) Work out the memory consumption of NN artifacts on a compressed pointer JVM
Date Mon, 24 Aug 2009 18:59:59 GMT

    [ https://issues.apache.org/jira/browse/HDFS-559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12747019#action_12747019 ]

Steve Loughran commented on HDFS-559:
-------------------------------------

Numbers for Java 6u14 on the 32-bit server VM:

sizeof(BlockInfo) = 40
sizeof(INode) = 56
sizeof(INodeDirectory) = 48
sizeof(INodeDirectoryWithQuota) = 80
sizeof(DatanodeDescriptor) = 120

These figures come from the instrumentation API; I haven't yet added the blocks underneath.
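
For reference, here is a minimal sketch of how shallow sizes can be measured via java.lang.instrument. The class and jar names are illustrative only, not the actual harness behind the numbers above; the agent class must be packaged in a jar whose manifest declares "Premain-Class: SizeOfAgent" and loaded with -javaagent.

{code:java}
// Minimal shallow-size measurement sketch using java.lang.instrument.
// Hypothetical names: SizeOfAgent, sizeof.jar.
import java.lang.instrument.Instrumentation;

public class SizeOfAgent {
  private static volatile Instrumentation inst;

  // Invoked by the JVM before main() when started with
  //   java -javaagent:sizeof.jar ...
  // and the jar manifest contains "Premain-Class: SizeOfAgent".
  public static void premain(String agentArgs, Instrumentation i) {
    inst = i;
  }

  // Shallow size of one object, in bytes. References are not followed,
  // which is why the blocks hanging off an INode are not counted here.
  public static long sizeOf(Object o) {
    if (inst == null) {
      throw new IllegalStateException("Agent not loaded via -javaagent");
    }
    return inst.getObjectSize(o);
  }
}
{code}

Note that Instrumentation.getObjectSize() returns the shallow size only, so a full NN footprint estimate would still need to walk the referenced block and datanode structures.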


> Work out the memory consumption of NN artifacts on a compressed pointer JVM
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-559
>                 URL: https://issues.apache.org/jira/browse/HDFS-559
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: name-node
>    Affects Versions: 0.21.0
>         Environment: 64-bit and 32-bit JVMs, Java 6u14 and JDK 7 betas, with -XX:+UseCompressedOops enabled/disabled
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>
> Following up HADOOP-1687, it would be nice to know the size of the datatypes under the Java 6u14 JVM, which offers compressed pointers.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

