hadoop-hdfs-issues mailing list archives

From "Allen Wittenauer (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (HDFS-134) premature end-of-decommission of datanodes
Date Fri, 18 Jul 2014 05:21:06 GMT

     [ https://issues.apache.org/jira/browse/HDFS-134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer resolved HDFS-134.

    Resolution: Fixed

This has probably been fixed. Gonna close this as stale.

> premature end-of-decommission of datanodes
> ------------------------------------------
>                 Key: HDFS-134
>                 URL: https://issues.apache.org/jira/browse/HDFS-134
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: dhruba borthakur
> Decommissioning requires that the nodes be listed in the dfs.hosts.exclude file. The
> administrator then runs the "dfsadmin -refreshNodes" command and the decommissioning process
> starts. Suppose that one of the datanodes being decommissioned has to re-register with
> the namenode. This can occur if the namenode restarts, or if the datanode restarts, while
> decommissioning is in progress. The namenode then refuses to talk to this datanode because
> it is in the exclude list! This is a premature end of the decommissioning process.
> One way to fix this bug is to make the namenode always accept registration requests,
> even from datanodes that are in the exclude list. The namenode, however, should set the
> "being decommissioned" flag for these datanodes and then restart the decommissioning
> process for them. When decommissioning is complete, the namenode will ask the datanodes
> to shut down.
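The proposed fix in the quoted report can be sketched as a small state machine. This is a hypothetical, heavily simplified model, not actual Hadoop NameNode code; all class, method, and state names here are illustrative:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the behavior proposed in HDFS-134: the
// namenode accepts registrations even from excluded datanodes,
// marks them as "being decommissioned", and only asks a node to
// shut down once decommissioning has finished. Names are
// illustrative, not real Hadoop APIs.
public class DecommissionSketch {
    enum State { NORMAL, DECOMMISSION_IN_PROGRESS, DECOMMISSIONED }

    private final Set<String> excludeList = new HashSet<>();
    private final Map<String, State> nodes = new HashMap<>();

    // Corresponds to "dfsadmin -refreshNodes": re-read the exclude
    // list and start decommissioning any newly excluded datanode.
    void refreshNodes(Set<String> newExcludes) {
        excludeList.clear();
        excludeList.addAll(newExcludes);
        for (String node : excludeList) {
            if (nodes.getOrDefault(node, State.NORMAL) == State.NORMAL) {
                nodes.put(node, State.DECOMMISSION_IN_PROGRESS);
            }
        }
    }

    // Proposed behavior: never refuse a registration. If the node is
    // in the exclude list, (re)start its decommissioning instead of
    // rejecting it, so a namenode or datanode restart cannot end the
    // process prematurely.
    boolean register(String node) {
        if (excludeList.contains(node)
                && nodes.get(node) != State.DECOMMISSIONED) {
            nodes.put(node, State.DECOMMISSION_IN_PROGRESS);
        } else {
            nodes.putIfAbsent(node, State.NORMAL);
        }
        return true; // registration is always accepted
    }

    // Called once all of the node's blocks are replicated elsewhere;
    // only now would the namenode tell the datanode to shut down.
    void decommissionComplete(String node) {
        nodes.put(node, State.DECOMMISSIONED);
    }

    State stateOf(String node) {
        return nodes.get(node);
    }
}
```

The key point of the sketch is that a re-registration during decommissioning lands a node back in `DECOMMISSION_IN_PROGRESS` rather than being refused.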

This message was sent by Atlassian JIRA
