hbase-user mailing list archives

From jingguo yao <yaojing...@gmail.com>
Subject Re: Problem with Hadoop and /etc/hosts file
Date Sat, 29 Sep 2012 02:43:04 GMT
I have the same problem as Alberto, and I have followed Harsh's guide
to solve it. But I still get the error log message. The following
code in org.apache.hadoop.hbase.mapreduce.TableInputFormatBase
produces the error message:

try {
  regionLocation = reverseDNS(regionAddress);
} catch (NamingException e) {
  LOG.error("Cannot resolve the host name for " + regionAddress +
      " because of " + e);
  regionLocation = regionServerAddress.getHostname();
}

So reverse DNS lookup must work for this error message to go away.
The HBase 0.94.1 reference guide also says "Both forward and reverse
DNS resolving should work." After doing the following configuration,
the error message is gone:

1. Edit /etc/resolv.conf to comment out the nameserver entry.
2. Run "dnsmasq".

http://hbase.apache.org/book.html mentions a hadoop-dns-checker tool.
Although I have not tried this tool for DNS checking, I think it is
worth a try when we have DNS problems with an HBase cluster.
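I have not used hadoop-dns-checker myself, but the basic idea can be sketched in a few lines of Java. This is only an illustrative sketch, not the real tool; the node names in main() are hypothetical. InetAddress.getByName does the forward lookup, and getCanonicalHostName the reverse one:

```java
import java.net.InetAddress;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of a DNS sanity check: forward-resolve each node
// name, reverse-resolve the resulting address, and flag any mismatch.
public class ClusterDnsCheck {
    public static boolean check(String hostname) {
        try {
            InetAddress addr = InetAddress.getByName(hostname);   // forward lookup
            String back = addr.getCanonicalHostName();            // reverse lookup
            boolean ok = back.equalsIgnoreCase(hostname);
            System.out.println(hostname + " -> " + addr.getHostAddress()
                + " -> " + back + (ok ? " OK" : " MISMATCH"));
            return ok;
        } catch (Exception e) {
            System.out.println(hostname + " FAILED: " + e);
            return false;
        }
    }

    public static void main(String[] args) {
        // Hypothetical node names; substitute your own cluster's host list.
        List<String> nodes = Arrays.asList("node01.cluster", "node02.cluster");
        for (String n : nodes) check(n);
    }
}
```

Run it on every node: a healthy setup prints "OK" for each host, and a name that forward-resolves but does not reverse-resolve shows up as a MISMATCH, which is exactly the situation that triggers the NamingException above.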

And the above code can still work without reverse DNS lookup in my
case: the variable regionLocation gets the same value with or without
reverse DNS lookup.
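That behavior matches the fallback pattern in the TableInputFormatBase snippet above: when the reverse lookup fails, the code falls back to a hostname it already knows from configuration. A hedged, self-contained sketch of the same pattern (the method and parameter names here are hypothetical, not HBase's):

```java
import java.net.InetAddress;

// Sketch of "try reverse DNS, fall back to a configured hostname".
// Note: getCanonicalHostName does not throw on a failed reverse lookup;
// it returns the textual IP instead, so we treat that as a failure too.
public class ReverseWithFallback {
    public static String locate(String ip, String configuredHostname) {
        try {
            String name = InetAddress.getByName(ip).getCanonicalHostName();
            return name.equals(ip) ? configuredHostname : name;
        } catch (Exception e) {
            return configuredHostname;
        }
    }

    public static void main(String[] args) {
        System.out.println(locate("127.0.0.1", "node01.cluster"));
    }
}
```

Either branch yields a usable hostname, which is why regionLocation can end up identical with or without working reverse DNS; the LOG.error line is the only visible difference.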

On Fri, Sep 21, 2012 at 10:37 AM, Harsh J <harsh@cloudera.com> wrote:
> This is what I would consider a simple-enough, sane networking setup
> (and can assert that it works very well):
> [NOTE: This is for simple, small clusters built by folks who are new
> to networking or don't have much time to spend on it. Of course, if
> you know what your DNS setup and resolution ought to look like,
> ignore this!]
> 1. A loopback address entry must exist in /etc/hosts. This must never
> be removed. "127.0.0.1 localhost.localdomain localhost" as the first
> line is an absolute, whether your services are going to utilize it or
> not.
> 2. If you are looking at a small cluster and feel OK with just using
> /etc/hosts, then each of your hosts must be present in the /etc/hosts
> file used in the cluster. A line of the form "EXT.ERN.AL.IP
> host01.domain host01" must exist for every host in the cluster that
> each node should know about (including, more importantly, itself).
> 3. (1) and (2) complete your /etc/hosts setup, which may in the end
> look like this on ALL nodes, for example (yes, you may rsync/rdist
> it across):
> 127.0.0.1 localhost.localdomain localhost
> 192.168.0.1 node01.cluster node01
> 192.168.0.2 node02.cluster node02
> 192.168.0.3 node03.cluster node03
> [NOTE: The IPs must come from the external NIC interface (eth0, etc.)
> address reported on each node via "ifconfig". I'll leave IP-assignment
> and DHCP usage outside of these guidelines.]
> 4. The /etc/nsswitch.conf must have, for its hosts entry, the config
> "hosts: files dns". This is usually the default - but ensure it is so
> on all nodes.
> 5. With (3) and (4) done, whenever resolution is demanded, the
> /etc/hosts file is what will be used, and that file is now in good
> shape.
> 6. The next step is to make sure that "hostname -f" and "hostname -s"
> report proper values on the whole cluster, for each node. It is vital
> that each machine's hostname is set to match the entry we refer to it
> by in the /etc/hosts file. Know that /etc/hosts is a lookup file, but
> the hostname comes from the OS itself when applications and tools
> query it.
> 7. For CentOS/RH/Fedora/etc. kinda distros, see
> http://www.electrictoolbox.com/changing-hostname-centos/ (File:
> /etc/sysconfig/network, config name HOSTNAME). For Ubuntu/Debian/etc.
> kinda distros, see
> http://www.ducea.com/2006/08/07/how-to-change-the-hostname-of-a-linux-system/
> for one example (File: /etc/hostname, one line simple entry there).
> 8. Once the hostname configured in the OS matches the corresponding
> node name defined in /etc/hosts, "hostname -f" should, on node01,
> report "node01.cluster" and "hostname -s" should report "node01".
> 9. With (5) and (8) properly done now, stuff will work fine. Begin
> your Hadoop/HBase configs.
> HTH some folks building out their new, small clusters. I personally
> used bind9 on the first system I built, but I had way too much time
> then to sit down and debug whitespace issues :)
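The /etc/hosts layout in the steps above can also be sanity-checked mechanically before touching DNS at all. A hypothetical sketch (not part of any Hadoop tool) that pulls the short names out of an /etc/hosts-style text, so you can compare them against what "hostname -s" reports on each node:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: parse /etc/hosts-style text laid out as
// "IP fqdn shortname" per line, skipping blanks and comments, and
// collect the short names for comparison with `hostname -s` output.
public class HostsFileCheck {
    public static List<String> shortNames(String hostsText) {
        List<String> names = new ArrayList<>();
        for (String line : hostsText.split("\n")) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith("#")) continue; // comments
            String[] fields = line.split("\\s+");                 // IP fqdn short
            if (fields.length >= 3) names.add(fields[2]);
        }
        return names;
    }

    public static void main(String[] args) {
        String sample =
              "127.0.0.1 localhost.localdomain localhost\n"
            + "192.168.0.1 node01.cluster node01\n"
            + "192.168.0.2 node02.cluster node02\n";
        System.out.println(shortNames(sample)); // [localhost, node01, node02]
    }
}
```

If the short name for a node is missing from this list, or differs from the node's own "hostname -s", step (6) above has not been completed for that node.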
