hadoop-hdfs-issues mailing list archives

From "Andrew Wang (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-7411) Refactor and improve decommissioning logic into DecommissionManager
Date Fri, 12 Dec 2014 02:05:15 GMT

     [ https://issues.apache.org/jira/browse/HDFS-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Wang updated HDFS-7411:
    Attachment: hdfs-7411.004.patch

Thanks Ming for reviewing. I attached a new patch which hopefully addresses your feedback.
I added a new config param which limits the number of nodes that can be decom'd at once. Above
the limit, nodes sit in a queue.
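The "limit plus queue" behavior could be sketched roughly as below. This is an illustrative sketch, not the actual HDFS-7411 patch; the class and method names (DecomTracker, startDecommission, finishDecommission) and the String node ids are invented for the example.

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;

// Hypothetical sketch: at most maxConcurrent nodes decommission at
// once; nodes above the limit wait in a FIFO queue and are promoted
// as active decommissions complete. Not the actual HDFS code.
public class DecomTracker {
    private final int maxConcurrent;
    private final Set<String> active = new HashSet<>();
    private final Queue<String> pending = new ArrayDeque<>();

    public DecomTracker(int maxConcurrent) {
        this.maxConcurrent = maxConcurrent;
    }

    /** Start decommissioning a node, or queue it if at the limit. */
    public void startDecommission(String node) {
        if (active.size() < maxConcurrent) {
            active.add(node);
        } else {
            pending.add(node);
        }
    }

    /** Mark a node finished and promote the next queued node, if any. */
    public void finishDecommission(String node) {
        active.remove(node);
        String next = pending.poll();
        if (next != null) {
            active.add(next);
        }
    }

    public int activeCount() { return active.size(); }
    public int pendingCount() { return pending.size(); }
}
```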

I also found and fixed what I think is a bug in block report processing of under-construction (UC) files. In
the new unit test I added, blocks would show up in the blocksMap with replicas, but those same
blocks would not show up on each DN's block list. [~arpitagarwal], could you take a
quick look at the one-line addition in BlockManager#addStoredBlockUnderConstruction?

I also wonder if the same issue exists for hsync(SyncFlag.UPDATE_LENGTH), but I didn't investigate.
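The inconsistency the unit test caught — a block listed in the blocks map with replica locations, but missing from a DN's own block list — amounts to a broken invariant. The sketch below expresses that invariant as a standalone check over plain maps; it is illustrative only (ReplicaConsistency, findMissing, and the String ids are invented) and does not use real BlockManager data structures.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Illustrative consistency check (not actual HDFS code): every DN that
// the block map lists as holding a replica should also list that block
// in its own per-DN block list. The bug described above is exactly a
// violation of this for under-construction blocks.
public class ReplicaConsistency {
    /**
     * @param blocksMap block id -> DNs the block map says hold a replica
     * @param dnBlocks  DN id -> block ids that DN's block list contains
     * @return "block@dn" entries where the DN is missing the block
     */
    public static List<String> findMissing(
            Map<String, Set<String>> blocksMap,
            Map<String, Set<String>> dnBlocks) {
        List<String> violations = new ArrayList<>();
        for (Map.Entry<String, Set<String>> e : blocksMap.entrySet()) {
            String block = e.getKey();
            for (String dn : e.getValue()) {
                Set<String> reported =
                        dnBlocks.getOrDefault(dn, Collections.emptySet());
                if (!reported.contains(block)) {
                    violations.add(block + "@" + dn);
                }
            }
        }
        return violations;
    }
}
```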

> Refactor and improve decommissioning logic into DecommissionManager
> -------------------------------------------------------------------
>                 Key: HDFS-7411
>                 URL: https://issues.apache.org/jira/browse/HDFS-7411
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 2.5.1
>            Reporter: Andrew Wang
>            Assignee: Andrew Wang
>         Attachments: hdfs-7411.001.patch, hdfs-7411.002.patch, hdfs-7411.003.patch, hdfs-7411.004.patch
> Would be nice to split out decommission logic from DatanodeManager to DecommissionManager.

This message was sent by Atlassian JIRA
