hadoop-common-dev mailing list archives

From "lixiangna (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-3500) Decommissioned node appears both in the "Live Datanodes" list with "In Service" status and in the "Dead Datanodes" list of the dfs namenode web UI
Date Thu, 05 Jun 2008 16:05:45 GMT
Decommissioned node appears both in the "Live Datanodes" list with "In Service" status and in the "Dead Datanodes" list of the dfs namenode web UI.
-----------------------------------------------------------------------------------------------------------------------------------

                 Key: HADOOP-3500
                 URL: https://issues.apache.org/jira/browse/HADOOP-3500
             Project: Hadoop Core
          Issue Type: Bug
          Components: dfs
    Affects Versions: 0.17.0
         Environment: linux-2.6.9
            Reporter: lixiangna


Try to decommission a node by following these steps (a configuration sketch follows the list):
(1) write the hostname of the node to be decommissioned into a file (the exclude file)
(2) specify the absolute path of the exclude file as the value of the configuration parameter dfs.hosts.exclude
(3) run "bin/hadoop dfsadmin -refreshNodes"

It is surprising that the node is then found both in the "Live Datanodes" list with "In Service" status and in the "Dead Datanodes" list of the dfs namenode web UI. When new data is copied into HDFS, the node's Used size increases just like that of the un-decommissioned nodes, so it is obviously still in service. Restarting HDFS or waiting a long time (two days) has not completed the decommission.
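Besides the web UI, the per-datanode state can also be cross-checked from the command line (the exact output fields differ between versions, so this is only a sketch of the check):

    $ bin/hadoop dfsadmin -report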

Even stranger, if nodes are configured as include nodes by similar steps (using the dfs.hosts parameter), then these include nodes and the exclude node all appear only in the "Dead Datanodes" list.
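For comparison, a sketch of the analogous include configuration being described, assuming an includes file alongside the excludes file (file names and hostnames are again illustrative):

    $ cat /home/hadoop/conf/includes
    datanode1.example.com
    datanode2.example.com

    <!-- conf/hadoop-site.xml -->
    <property>
      <name>dfs.hosts</name>
      <value>/home/hadoop/conf/includes</value>
    </property>

    $ bin/hadoop dfsadmin -refreshNodes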

I ran these tests many times on both 0.17.0 and 0.15.1 and the results were the same, so I think there may be a bug.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

