hadoop-hdfs-issues mailing list archives

From "Inigo Goiri (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-11257) Evacuate DN when the remaining is negative
Date Fri, 16 Dec 2016 19:55:58 GMT

    [ https://issues.apache.org/jira/browse/HDFS-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15755365#comment-15755365 ]

Inigo Goiri commented on HDFS-11257:

The proposal would be for the {{BlockManager}} to check for this situation and leverage the
code in {{blockHasEnoughRacks()}} to mark blocks as needing replicas on other nodes. Once
that's done, the block placement policy would mark the blocks on machines with {{getRemaining() < 0}}
for deletion.
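
A minimal, self-contained sketch of that check (plain Java; the {{NodeReport}} type and the method names are illustrative stand-ins for this discussion, not the actual {{BlockManager}} data structures):

{code:java}
import java.util.*;

class OverUsedNodeCheck {
  // Illustrative stand-in for a DataNode report: remaining bytes and hosted block ids.
  record NodeReport(String name, long remainingBytes, List<Long> blockIds) {}

  // Blocks hosted on any node whose remaining space has gone negative;
  // these would be queued for replication to other nodes before the
  // local copies can be scheduled for deletion.
  static Set<Long> blocksNeedingExtraReplicas(List<NodeReport> reports) {
    Set<Long> needed = new HashSet<>();
    for (NodeReport r : reports) {
      if (r.remainingBytes() < 0) {
        needed.addAll(r.blockIds());
      }
    }
    return needed;
  }

  public static void main(String[] args) {
    List<NodeReport> reports = List.of(
        new NodeReport("dn1", 50L * 1024 * 1024, List.of(1L, 2L)),
        new NodeReport("dn2", -8L * 1024 * 1024, List.of(3L, 4L)));
    System.out.println(blocksNeedingExtraReplicas(reports)); // e.g. [3, 4]
  }
}
{code}

In the actual change, that set would presumably feed the existing under-replication bookkeeping so the placement policy can pick targets on other nodes, as described above.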

> Evacuate DN when the remaining is negative
> ------------------------------------------
>                 Key: HDFS-11257
>                 URL: https://issues.apache.org/jira/browse/HDFS-11257
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>    Affects Versions: 2.7.3
>            Reporter: Inigo Goiri
> Datanodes have a maximum amount of disk they can use. This is set using {{dfs.datanode.du.reserved}}.
For example, if we have a 1TB disk and we set the reserved space to 100GB, the DN can only use ~900GB.
However, if we fill the DN and other processes (e.g., logs or co-located services) later start
to use disk space, the remaining space goes negative and the used storage exceeds 100% (a sample
configuration is sketched below).
> The Rebalancer or decommissioning would cover this situation. However, both approaches
require administrator intervention, while this is a situation that violates the settings. Note
that decommissioning would be too extreme, as it would evacuate all the data.
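
For reference, the reserved space from the example above would be configured in {{hdfs-site.xml}} roughly as follows; the value is simply 100 * 1024^3 bytes, and the property applies per volume:

{code:xml}
<!-- Reserve 100GB of each data volume for non-HDFS use; with a 1TB disk
     this leaves roughly 900GB usable by the DataNode. -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>107374182400</value> <!-- 100 * 1024 * 1024 * 1024 bytes -->
</property>
{code}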

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
