hadoop-hdfs-issues mailing list archives

From "Kihwal Lee (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-6088) Add configurable maximum block count for datanode
Date Fri, 14 Mar 2014 21:01:03 GMT

    [ https://issues.apache.org/jira/browse/HDFS-6088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13935638#comment-13935638 ]

Kihwal Lee commented on HDFS-6088:

bq. Would be nice to avoid having yet another config that users have to set.
I agree.

I was looking at the heap usage of a DN. Heap usage has dropped considerably since we moved to a GSet for the block map, so much so that the automatically computed GSet capacity no longer seems to match the number of blocks the heap can actually hold. For example, I brought up a DN with about 62K blocks and the max heap set to 1GB; the GSet was created for 524,288 entries.
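
For reference, the automatic sizing works roughly like LightWeightGSet.computeCapacity(): take a fixed percentage of the max heap, divide by the reference size, and round to a power of two. Below is a minimal self-contained sketch of that logic; the 0.4% budget and 8-byte reference size are assumptions chosen to reproduce the observed 524,288 entries, not values read from this DN's configuration.

{code:java}
// Rough sketch of how a GSet capacity is derived from the max heap,
// modeled on LightWeightGSet.computeCapacity(). The percentage and
// reference size here are illustrative assumptions.
public class GSetCapacitySketch {
  static int computeCapacity(long maxMemoryBytes, double percentage,
                             int referenceSizeBytes) {
    // Memory budget for the table: a fixed fraction of the max heap.
    double budget = maxMemoryBytes * percentage / 100.0;
    // Number of entry references that fit in the budget.
    long entries = (long) (budget / referenceSizeBytes);
    // Round down to a power of two, since the table size is a power of two.
    int exponent = 63 - Long.numberOfLeadingZeros(Math.max(1, entries));
    return 1 << Math.min(exponent, 30);
  }

  public static void main(String[] args) {
    long oneGB = 1L << 30;
    // With a 1GB max heap, an assumed ~0.4% budget and 8-byte references,
    // this lands on the 524,288 entries observed on the DN.
    System.out.println(computeCapacity(oneGB, 0.4, 8)); // 524288
  }
}
{code}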

Looking at the heap usage, each block takes up about 315 bytes, and the other parts take up less than 50MB. In any case, 315 * 524,288 ≈ 157MB. Even if the other parts take up more than expected, the node can easily store 4X this many blocks. But storing 2M entries in a GSet sized for 524,288 is not ideal.
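
Spelling out the arithmetic (the 315 bytes/block and 524,288-entry capacity are the measurements above; the 1GB heap and sub-50MB overhead are from the same test setup):

{code:java}
// Back-of-the-envelope check of the numbers above.
public class DnHeapMath {
  public static void main(String[] args) {
    long bytesPerBlock = 315;       // measured per-block heap cost
    long gsetCapacity  = 524_288;   // auto-computed GSet size for a 1GB heap
    long otherOverhead = 50L << 20; // "other parts", under 50MB

    long memAtCapacity = bytesPerBlock * gsetCapacity; // 315 * 524288
    System.out.printf("heap used at GSet capacity: %d MB%n",
        memAtCapacity >> 20);       // ~157 MB

    // Even allowing for overhead, a 1GB heap has room for several times
    // the GSet capacity in blocks -- at least the 4X (~2M entries)
    // mentioned above, which would badly overload the table.
    long heap = 1L << 30;
    long blocksThatFit = (heap - otherOverhead) / bytesPerBlock;
    System.out.printf("blocks that fit in heap: ~%dK%n",
        blocksThatFit / 1000);      // ~3200K, over 6x the GSet capacity
  }
}
{code}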

> Add configurable maximum block count for datanode
> -------------------------------------------------
>                 Key: HDFS-6088
>                 URL: https://issues.apache.org/jira/browse/HDFS-6088
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Kihwal Lee
> Currently datanode resources are protected by the free space check and the balancer. But datanodes can run out of memory simply by storing too many blocks. If the blocks are small, datanodes will appear to have plenty of space for more.
> I propose adding a configurable max block count to the datanode. Since datanodes can have different heap configurations, it makes sense to make this a datanode-level setting rather than something enforced by the namenode.
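
No patch is attached to this message, so purely as an illustration of the proposal, a DN-side limit could look like the sketch below. The property name dfs.datanode.max.blocks and the enforcement point are hypothetical, not taken from an actual patch.

{code:java}
// Hypothetical sketch only: HDFS-6088 proposes the idea, but this
// property name and enforcement point are illustrative.
import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;

public class BlockCountLimiter {
  // Hypothetical config key; not an actual HDFS property.
  public static final String DFS_DATANODE_MAX_BLOCKS_KEY =
      "dfs.datanode.max.blocks";
  public static final long DFS_DATANODE_MAX_BLOCKS_DEFAULT = 0; // 0 = unlimited

  private final long maxBlocks;
  private final AtomicLong blockCount = new AtomicLong();

  public BlockCountLimiter(long maxBlocks) {
    this.maxBlocks = maxBlocks;
  }

  /** Called before the DN accepts a new replica; throws if over the limit. */
  public void checkAndIncrement() throws IOException {
    long count = blockCount.incrementAndGet();
    if (maxBlocks > 0 && count > maxBlocks) {
      blockCount.decrementAndGet();
      throw new IOException("Datanode block count " + count
          + " would exceed configured maximum " + maxBlocks);
    }
  }

  /** Called when a replica is deleted or invalidated. */
  public void decrement() {
    blockCount.decrementAndGet();
  }
}
{code}

Keeping the check on the datanode side matches the rationale in the description: heap sizes differ per node, so no single namenode-enforced number fits every DN.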

