hadoop-common-dev mailing list archives

From "Chris Douglas (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-3426) Datanode does not start up if the local machines DNS isnt working right and dfs.datanode.dns.interface==default
Date Fri, 07 Nov 2008 07:36:44 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-3426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Douglas updated HADOOP-3426:
----------------------------------

    Status: Open  (was: Patch Available)

Sorry this sat in the patch queue for so long without being reviewed.

* DNS::reverseDns(Inet4Address,String) has some commented-out code in it that should be removed.
* Shouldn't cachedHostAddress and cachedHostName be final, rather than volatile Strings? Calling
a method to initialize these is a good idea, but doing it lazily seems to offer no advantages.
{code}
  private static final String cachedHostname = getLocalHostname();

  private static String getLocalHostname() {
    String localhost;
    try {
      localhost = InetAddress.getLocalHost().getCanonicalHostName();
    } catch (UnknownHostException e) {
      LOG.info("Unable to determine local hostname "
              + "-falling back to \""+LOCALHOST+"\"", e);
      localhost = LOCALHOST;
    }
    return localhost;
  }
{code}
These should also be listed at the top of the class, with the other fields. Since they're
never updated, "cached" seems like the wrong name.
* Until something productive is done with IPv6 addresses, the effort to throw from its handler
method seems ill-spent. The check for an Inet4Address is worthwhile, but it should live in the
existing, public reverseDns method; the new, private overloads are unnecessary. A sketch of that follows below.
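To make that last point concrete, a minimal sketch of keeping the guard in the public method (javax.naming imports elided, as in the snippet above); the PTR-lookup body shown is the usual JNDI reverse query and may differ in detail from what is actually in DNS.java, and the exception message is only a placeholder:
{code}
  public static String reverseDns(InetAddress hostIp, String ns)
      throws NamingException {
    // Keep the IPv4 guard in the one public entry point rather than in
    // new private overloads.
    if (!(hostIp instanceof Inet4Address)) {
      throw new NamingException("Reverse DNS lookup is only supported for "
          + "IPv4 addresses: " + hostIp);
    }
    // Build "d.c.b.a.in-addr.arpa" and query the (optional) nameserver for PTR.
    String[] octets = hostIp.getHostAddress().split("\\.");
    String reverseIp = octets[3] + "." + octets[2] + "." + octets[1] + "."
        + octets[0] + ".in-addr.arpa";
    DirContext ctx = new InitialDirContext();
    try {
      Attributes attrs = ctx.getAttributes(
          "dns://" + (ns == null ? "" : ns) + "/" + reverseIp,
          new String[] { "PTR" });
      return attrs.get("PTR").get().toString();
    } finally {
      ctx.close();
    }
  }
{code}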

> Datanode does not start up if the local machines DNS isnt working right and dfs.datanode.dns.interface==default
> ---------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-3426
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3426
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.19.0
>         Environment: Ubuntu 8.04, at home, no reverse DNS
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Minor
>         Attachments: hadoop-3426.patch, hadoop-3426.patch, hadoop-3426.patch
>
>
> This is the third Java project I've been involved in that doesn't work on my home network,
due to implementation issues with java.net.InetAddress.getLocalHost(), issues that only show
up on an unmanaged network. Fortunately my home network exists to find these problems early.
> In Hadoop, if the local hostname doesn't resolve, the datanode does not start up:
> Caused by: java.net.UnknownHostException: k2: k2
> at java.net.InetAddress.getLocalHost(InetAddress.java:1353)
> at org.apache.hadoop.net.DNS.getDefaultHost(DNS.java:185)
> at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:184)
> at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:162)
> at org.apache.hadoop.dfs.ExtDataNode.<init>(ExtDataNode.java:55)
> at org.smartfrog.services.hadoop.components.datanode.DatanodeImpl.sfStart(DatanodeImpl.java:60)
> While failing to start like this is arguably acceptable in a production (non-virtual) cluster, if you are playing
with VMware/Xen private networks or on a home network, you can't rely on DNS.
> 1. In these situations, it's usually better to fall back to using "localhost" or 127.0.0.1
as the hostname if Java can't work it out for itself.
> 2. It's often good to cache this if it's used in lots of parts of the system; otherwise the
30s timeouts can cause problems of their own.
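A rough sketch of how point 1 plays out on the dfs.datanode.dns.interface=default path from the stack trace above; the getDefaultHost signature and the "default" handling shown are assumptions for illustration, not a quote of the patch:
{code}
  public static String getDefaultHost(String strInterface)
      throws UnknownHostException {
    if ("default".equals(strInterface)) {
      // Use the hostname resolved once at class load; it falls back to
      // "localhost" instead of throwing when reverse DNS is broken.
      return cachedHostname;
    }
    // The per-interface / nameserver lookup is unchanged and elided in this sketch.
    throw new UnsupportedOperationException("elided in this sketch");
  }
{code}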

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

