hadoop-hdfs-dev mailing list archives

From "Kihwal Lee (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (HDFS-8041) Consider remaining space during block placement if dfs space is highly utilized
Date Tue, 11 Oct 2016 13:07:21 GMT

     [ https://issues.apache.org/jira/browse/HDFS-8041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kihwal Lee resolved HDFS-8041.
    Resolution: Won't Fix

While this jira makes block placement more aggressive about preserving free space when space
is low, there are corner cases where it can perform poorly. AvailableSpaceBlockPlacementPolicy
might be a better tool. Closing, won't fix.
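For readers pointed at the suggested alternative: the stock AvailableSpaceBlockPlacementPolicy
(added in HDFS-8131) is enabled via hdfs-site.xml. The keys below reflect the upstream defaults
as I understand them; verify the exact names and the valid fraction range (0.5-1.0) against the
hdfs-default.xml of your Hadoop version.

```xml
<!-- Sketch of enabling AvailableSpaceBlockPlacementPolicy in hdfs-site.xml. -->
<property>
  <name>dfs.block.replicator.classname</name>
  <value>org.apache.hadoop.hdfs.server.blockmanagement.AvailableSpaceBlockPlacementPolicy</value>
</property>
<property>
  <!-- Probability weight for preferring the node with more free space
       when two candidates are compared; 0.5 is uniform, 1.0 always
       prefers the emptier node. -->
  <name>dfs.namenode.available-space-block-placement-policy.balanced-space-preference-fraction</name>
  <value>0.6</value>
</property>
```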

> Consider remaining space during block placement if dfs space is highly utilized
> -------------------------------------------------------------------------------
>                 Key: HDFS-8041
>                 URL: https://issues.apache.org/jira/browse/HDFS-8041
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
>              Labels: BlockPlacementPolicy
>         Attachments: HDFS-8041.v1.patch, HDFS-8041.v2.patch, HDFS-8041.v3.patch, HDFS-8041.v4.patch
> This feature is helpful in keeping smaller nodes (i.e. in a heterogeneous environment) from
constantly becoming full when the overall space utilization is over a certain threshold.  When
utilization is low, the balancer can keep up, but once the average per-node usage goes over
the capacity of the smaller nodes, they fill up quickly even after a perfect balance.
> This jira proposes an improvement, optionally enabled, that slows down the rate of space
usage growth on smaller nodes when the overall storage utilization is over a configured
threshold.  It will not replace the balancer; rather, it will help the balancer keep up.
Also, the primary replica placement will not be affected; only the replicas typically placed
in a remote rack will be subject to this check.
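The behavior described above can be sketched as follows. This is an illustrative
reconstruction, not the actual HDFS-8041 patch: the class, method, and field names
(`RemainingSpaceCheck`, `chooseRemoteTargets`, `Node`) are hypothetical, and the filtering
rule (skip candidates with below-average remaining space once utilization crosses the
threshold) is one plausible reading of the proposal.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical sketch of the HDFS-8041 idea: once overall cluster
 * utilization exceeds a configured threshold, remote-rack replica
 * candidates with below-average remaining space are skipped. The
 * primary (local) replica is never filtered, matching the jira's note
 * that only remote-rack replicas are subject to the check.
 */
public class RemainingSpaceCheck {

    /** Minimal stand-in for a datanode's storage report. */
    public static final class Node {
        public final String name;
        public final long capacity; // bytes
        public final long used;     // bytes

        public Node(String name, long capacity, long used) {
            this.name = name;
            this.capacity = capacity;
            this.used = used;
        }

        public long remaining() { return capacity - used; }
    }

    /**
     * Filters candidate remote-rack targets. Below the utilization
     * threshold the candidate list is returned unchanged; above it,
     * nodes whose remaining space is under the cluster average are
     * dropped, slowing the growth of nearly-full (smaller) nodes.
     */
    public static List<Node> chooseRemoteTargets(List<Node> candidates,
                                                 double utilizationThreshold) {
        long totalCapacity = 0, totalUsed = 0;
        for (Node n : candidates) {
            totalCapacity += n.capacity;
            totalUsed += n.used;
        }
        double utilization = (double) totalUsed / totalCapacity;
        if (utilization < utilizationThreshold) {
            return candidates; // plenty of space: no filtering
        }
        long avgRemaining = (totalCapacity - totalUsed) / candidates.size();
        List<Node> result = new ArrayList<>();
        for (Node n : candidates) {
            if (n.remaining() >= avgRemaining) {
                result.add(n); // keep nodes with at least average free space
            }
        }
        return result;
    }
}
```

Note the graceful degradation: below the threshold the policy is a no-op, which is why the
jira can describe the feature as safely disabled by default.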
> The appropriate threshold is specific to the cluster configuration. There is no generally
good value to set, so it is disabled by default. We have seen cases where a threshold of 85%
to 90% would help. Figuring out when {{totalSpaceUsed / numNodes}} becomes close to the
capacity of a smaller node is helpful in determining the threshold.
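The sizing heuristic above can be made concrete with a small calculation. The cluster shape
here is hypothetical, chosen only to show the arithmetic: solve
{{totalSpaceUsed / numNodes == smallestCapacity}} for the overall utilization at which the
smallest node would be full under perfect balance, then set the threshold somewhat below that.

```java
/**
 * Worked version of the jira's sizing guidance: find the overall
 * cluster utilization (0.0-1.0) at which the average per-node usage
 * (totalSpaceUsed / numNodes) equals the smallest node's capacity.
 * Hypothetical helper, not part of HDFS.
 */
public class ThresholdHeuristic {

    public static double utilizationWhenSmallNodesFill(long totalCapacityBytes,
                                                       int numNodes,
                                                       long smallestCapacityBytes) {
        // average per-node usage == smallestCapacity
        //   => totalUsed == smallestCapacity * numNodes
        double totalUsedAtFill = (double) smallestCapacityBytes * numNodes;
        return totalUsedAtFill / totalCapacityBytes;
    }
}
```

For example, a 10-node cluster of nine 10 TB nodes and one 4 TB node (94 TB total) reaches
that point at 40/94, roughly 43% overall utilization, so a useful threshold for that cluster
would sit well below the 85-90% range the jira mentions for its clusters.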

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org
