hadoop-hdfs-issues mailing list archives

From "Kihwal Lee (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-6088) Add configurable maximum block count for datanode
Date Tue, 11 Mar 2014 16:37:43 GMT
Kihwal Lee created HDFS-6088:

             Summary: Add configurable maximum block count for datanode
                 Key: HDFS-6088
                 URL: https://issues.apache.org/jira/browse/HDFS-6088
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Kihwal Lee

Currently datanode resources are protected by the free space check and the balancer. But
datanodes can run out of memory simply by storing too many blocks. If blocks are small,
datanodes will appear to have plenty of space to accept more of them.

I propose adding a configurable maximum block count to the datanode. Since datanodes can have
different heap configurations, it makes sense to enforce this at the datanode level, rather than
from the namenode.
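A datanode-level limit like this would presumably be set in hdfs-site.xml. A minimal sketch, assuming a hypothetical property name (the actual key chosen for this issue may differ):

```xml
<!-- hdfs-site.xml: cap the number of block replicas a single datanode will store.
     "dfs.datanode.max.block.count" is a hypothetical property name used for
     illustration; the default of 0 here would mean "no limit". -->
<property>
  <name>dfs.datanode.max.block.count</name>
  <value>500000</value>
  <description>
    Maximum number of blocks this datanode will store. Once reached, the
    datanode would report itself as having no remaining capacity so the
    namenode stops targeting it for new block placements. Tune per node
    based on its configured heap size.
  </description>
</property>
```

Because each datanode reads its own hdfs-site.xml, operators could give nodes with larger heaps a higher limit, which matches the datanode-level enforcement argued for above.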

This message was sent by Atlassian JIRA
