hadoop-hdfs-dev mailing list archives

From "Andy Isaacson (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-3934) duplicative dfs_hosts entries handled wrong
Date Thu, 13 Sep 2012 17:38:07 GMT
Andy Isaacson created HDFS-3934:

             Summary: duplicative dfs_hosts entries handled wrong
                 Key: HDFS-3934
                 URL: https://issues.apache.org/jira/browse/HDFS-3934
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 2.0.1-alpha
            Reporter: Andy Isaacson
            Assignee: Andy Isaacson
            Priority: Minor

A dead DN listed in dfs_hosts_allow.txt by IP address and in dfs_hosts_exclude.txt by hostname
ends up being displayed twice in {{dfsnodelist.jsp?whatNodes=DEAD}} after the NN restarts, because
{{getDatanodeListForReport}} does not handle such a "pseudo-duplicate" correctly:
# the "Remove any nodes we know about from the map" loop no longer has the information needed
to remove the spurious entries, and
# the "The remaining nodes are ones that are referenced by the hosts files" loop does not
do hostname lookups, so it does not know that the IP and the hostname refer to the same host.
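The effect of the second loop's missing lookup can be sketched as follows. This is a hypothetical simplification, not the actual {{getDatanodeListForReport}} code: hosts-file entries are treated as opaque strings, so an include entry spelled as an IP and an exclude entry spelled as a hostname are never recognized as the same dead node.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the pseudo-duplicate described above.
// Entries are keyed by the literal hosts-file string with no DNS
// resolution, so "10.0.0.5" and "dn1.example.com" (illustrative
// values for one physical host) produce two dead-node entries.
public class PseudoDuplicateSketch {
    static int countDeadEntries(String includeEntry, String excludeEntry) {
        Map<String, Boolean> dead = new HashMap<>();
        dead.put(includeEntry, true);
        dead.put(excludeEntry, true);  // same host, different spelling
        return dead.size();
    }

    public static void main(String[] args) {
        // One physical host spelled two ways -> reported twice:
        System.out.println(countDeadEntries("10.0.0.5", "dn1.example.com")); // 2
        // Identical spellings would collapse to one entry:
        System.out.println(countDeadEntries("10.0.0.5", "10.0.0.5")); // 1
    }
}
```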

Relatedly, such an IP-based dfs_hosts entry results in a cosmetic problem in the JSP output:
the *Node* column shows ":50010" as the nodename, with HTML markup {{<a href="http://:50075/browseDirectory.jsp?namenodeInfoPort=50070&amp;dir=%2F&amp;nnaddr="}}
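The empty-looking markup can be reproduced with a small sketch. The method name and signature below are illustrative, not the actual NameNode JSP code: when the entry came from an unresolved IP, the hostname field is empty, so the rendered cell degenerates to ":50010" with a "http://:50075" link.

```java
// Hypothetical sketch of the cosmetic problem described above: an
// empty hostname string produces ":50010" as the visible nodename
// and "http://:50075/..." as the link target.
public class EmptyNodeNameSketch {
    static String nodeCell(String hostname, int infoPort, int xferPort) {
        return "<a href=\"http://" + hostname + ":" + infoPort
             + "/browseDirectory.jsp\">" + hostname + ":" + xferPort + "</a>";
    }

    public static void main(String[] args) {
        System.out.println(nodeCell("", 50075, 50010));
        // <a href="http://:50075/browseDirectory.jsp">:50010</a>
    }
}
```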

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
