hadoop-hdfs-issues mailing list archives

From "Andrew Wang (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-7411) Refactor and improve decommissioning logic into DecommissionManager
Date Fri, 16 Jan 2015 01:21:35 GMT

     [ https://issues.apache.org/jira/browse/HDFS-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Wang updated HDFS-7411:
    Attachment: hdfs-7411.007.patch

Here's a new patch addressing Colin's review feedback:

* Got rid of the nodes.per.interval config key and use blocks.per.interval instead. I didn't
bother with a DeprecationDelta because the keys are semantically different and can't simply
be aliased. Not sure if there's more to do here.
* Added support for "0 means unlimited" for the max.concurrent.tracked.nodes property
* Fixed typos and spacing, and added the additional debug logging requested. Dropped the
"every 30 minutes" print, since we print out the first insufficiently replicated block
encountered while checking each node.
* Fixed a small off-by-one (OBO) in the rate limiting, and added a test for the limiting
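The limiting described above (a per-interval block budget, plus a tracked-nodes cap where
0 means unlimited) could look roughly like the sketch below. The class, method, and value
names here are illustrative only, not the actual DecommissionManager code from the patch:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch: blocks-per-interval rate limiting with a
// "0 means unlimited" cap on concurrently tracked nodes. All names
// are hypothetical, not taken from the HDFS-7411 patch.
public class DecomScanSketch {
    private final int blocksPerInterval;  // max blocks scanned per scan pass
    private final int maxTrackedNodes;    // 0 is interpreted as "no limit"

    public DecomScanSketch(int blocksPerInterval, int maxTrackedNodes) {
        this.blocksPerInterval = blocksPerInterval;
        this.maxTrackedNodes = maxTrackedNodes;
    }

    /** How many decommissioning nodes may be tracked this pass. */
    public int effectiveTrackedLimit(int pendingNodes) {
        // 0 means unlimited, so track everything that is pending.
        return maxTrackedNodes == 0
            ? pendingNodes
            : Math.min(maxTrackedNodes, pendingNodes);
    }

    /**
     * Consumes per-node block counts until the per-interval budget is
     * exhausted. Note the >= comparison: breaking only when
     * scanned > blocksPerInterval would let one extra node's blocks
     * through, the kind of off-by-one the bullet above refers to.
     */
    public int scan(Queue<Integer> nodeBlockCounts) {
        int scanned = 0;
        while (!nodeBlockCounts.isEmpty()) {
            if (scanned >= blocksPerInterval) {
                break;  // budget used up; resume on the next interval
            }
            scanned += nodeBlockCounts.poll();
        }
        return scanned;
    }

    public static void main(String[] args) {
        DecomScanSketch s = new DecomScanSketch(100, 0);
        Queue<Integer> q = new ArrayDeque<>();
        q.add(60);
        q.add(60);
        q.add(60);
        System.out.println(s.scan(q));                  // 120: stops once budget is hit
        System.out.println(s.effectiveTrackedLimit(5)); // 5: 0 means unlimited
    }
}
```

With a budget of 100 blocks, the scan admits the second node (reaching 120) but stops
before the third, which is the intended "soft" per-interval limit behavior.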

> Refactor and improve decommissioning logic into DecommissionManager
> -------------------------------------------------------------------
>                 Key: HDFS-7411
>                 URL: https://issues.apache.org/jira/browse/HDFS-7411
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 2.5.1
>            Reporter: Andrew Wang
>            Assignee: Andrew Wang
>         Attachments: hdfs-7411.001.patch, hdfs-7411.002.patch, hdfs-7411.003.patch, hdfs-7411.004.patch,
hdfs-7411.005.patch, hdfs-7411.006.patch, hdfs-7411.007.patch
> Would be nice to split out decommission logic from DatanodeManager to DecommissionManager.

This message was sent by Atlassian JIRA
