hadoop-common-dev mailing list archives

From "Christian Kunz (JIRA)" <j...@apache.org>
Subject [jira] Created: (HADOOP-3433) dfs.hosts.exclude not working as expected
Date Wed, 21 May 2008 22:11:56 GMT
dfs.hosts.exclude not working as expected
-----------------------------------------

                 Key: HADOOP-3433
                 URL: https://issues.apache.org/jira/browse/HADOOP-3433
             Project: Hadoop Core
          Issue Type: Bug
          Components: dfs
    Affects Versions: 0.16.3
            Reporter: Christian Kunz


We had to decommission a lot of hosts.

Therefore, we added them to dfs.hosts.exclude and called 'dfsadmin -refreshNodes'.
The list of excluded nodes appeared in the list of dead nodes (and still remained in the list
of live nodes), but no replication took place for more than 20 hours (no NameSystem.addStoredBlock
messages in the namenode log).
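For context, the exclusion was set up in the usual way; a rough sketch follows (the exclude-file
path is illustrative, not our actual one):

    <!-- hadoop-site.xml -->
    <property>
      <name>dfs.hosts.exclude</name>
      <value>/path/to/exclude-file</value>
    </property>

    # exclude file lists one hostname per line; then, on the namenode:
    bin/hadoop dfsadmin -refreshNodes

We judged replication activity by grepping the namenode log, along these lines (the log file
name depends on the installation):

    grep 'NameSystem.addStoredBlock' logs/hadoop-*-namenode-*.log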

A few hours ago we stopped one of the datanodes on that list. After it moved from the live node
list to the dead node list (double entry), replication started immediately and completed after
about 1 hour (~10,000 blocks replicated).
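For the record, the datanode was stopped with the standard daemon script on that host,
i.e. something like:

    bin/hadoop-daemon.sh stop datanode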

Somehow, mere exclusion does not trigger replication as it should.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

