hadoop-hdfs-dev mailing list archives

From "Kihwal Lee (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (HDFS-6088) Add configurable maximum block count for datanode
Date Wed, 29 Oct 2014 15:07:34 GMT

     [ https://issues.apache.org/jira/browse/HDFS-6088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kihwal Lee resolved HDFS-6088.
------------------------------
    Resolution: Won't Fix
      Assignee: Kihwal Lee

> Add configurable maximum block count for datanode
> -------------------------------------------------
>
>                 Key: HDFS-6088
>                 URL: https://issues.apache.org/jira/browse/HDFS-6088
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
>
> Currently datanode resources are protected by the free space check and the balancer.
> But datanodes can run out of memory simply by storing too many blocks. If blocks are
> small, datanodes will appear to have plenty of space to take on more blocks.
> I propose adding a configurable max block count to the datanode. Since datanodes can
> have different heap configurations, it makes sense to enforce this at the datanode
> level, rather than have it enforced by the namenode.
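Had the proposal gone forward, the datanode-side check might have looked something like the sketch below. This is purely illustrative: the property name `dfs.datanode.max.block.count` and the `BlockCountLimit` class are hypothetical (the issue was resolved Won't Fix, so no such configuration exists in HDFS).

```java
/**
 * Hypothetical sketch of a datanode-level block count limit,
 * as proposed (and ultimately rejected) in HDFS-6088.
 * The property name dfs.datanode.max.block.count is invented for
 * illustration; it is not a real HDFS configuration key.
 */
public class BlockCountLimit {
    private final long maxBlocks; // hypothetical dfs.datanode.max.block.count
    private long storedBlocks;    // replicas currently on this datanode

    public BlockCountLimit(long maxBlocks) {
        this.maxBlocks = maxBlocks;
    }

    /** Returns true if this datanode may accept another block replica.
     *  A limit of zero or less disables the check. */
    public boolean canAcceptBlock() {
        return maxBlocks <= 0 || storedBlocks < maxBlocks;
    }

    /** Record that a replica was added to this datanode. */
    public void blockAdded() {
        storedBlocks++;
    }
}
```

Keeping the limit per-datanode (rather than a namenode-enforced cluster setting) matches the rationale in the description: each datanode's safe block count depends on its own heap size.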



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
