hadoop-hdfs-issues mailing list archives

From "Kihwal Lee (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-6088) Add configurable maximum block count for datanode
Date Wed, 29 Oct 2014 15:09:35 GMT

    [ https://issues.apache.org/jira/browse/HDFS-6088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14188424#comment-14188424 ]

Kihwal Lee commented on HDFS-6088:

There are other things that can make the DN run out of memory very easily. We will probably
address these issues in the next major release, where many old dependencies can be updated.

> Add configurable maximum block count for datanode
> -------------------------------------------------
>                 Key: HDFS-6088
>                 URL: https://issues.apache.org/jira/browse/HDFS-6088
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
> Currently datanode resources are protected by the free space check and the balancer.
> But datanodes can run out of memory simply by storing too many blocks. If the blocks are
> small, datanodes will appear to have plenty of space to put more blocks.
> I propose adding a configurable max block count to the datanode. Since datanodes can have
> different heap configurations, it makes sense to make this a datanode-level setting, rather
> than something enforced by the namenode.
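
A datanode-level limit like the one proposed could be expressed as an hdfs-site.xml property.
The property name and default value below are purely illustrative assumptions, not an actual
Hadoop configuration key defined by this issue:

```xml
<!-- Hypothetical per-datanode cap on stored block replicas.
     The key name and default here are illustrative only. -->
<property>
  <name>dfs.datanode.max.block.count</name>
  <value>2000000</value>
  <description>
    Maximum number of block replicas this datanode will accept. Once the
    limit is reached, the datanode would refuse further block writes even
    if free disk space remains, protecting its heap from the per-replica
    metadata overhead that accumulates when many small blocks are stored.
  </description>
</property>
```

Keeping the setting in each datanode's own configuration, rather than enforcing it from the
namenode, lets operators tune the cap to each node's heap size.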

This message was sent by Atlassian JIRA
