hadoop-general mailing list archives

From "Bogdan M. Maryniuk" <bogdan.maryn...@gmail.com>
Subject Re: Wrong FS error
Date Thu, 09 Jul 2009 09:45:17 GMT
On Thu, Jul 9, 2009 at 1:15 PM, Saurabh Nanda<saurabhnanda@gmail.com> wrote:
> You are right. I am using /etc/hosts and my hadoop machines do not have
> proper DNS entries. However, why should that matter if I am using IP address
> in the configuration files?

Sorry, after I posted I realized that was not precisely correct. :-)
What I wanted to say is that your /etc/hosts might be configured
differently on each machine.

> Relevant entries from hadoop-site.xml:
> fs.default.name=hdfs://
> mapred.job.tracker=

Well, if you are using /etc/hosts, why bother with IP addresses at all?
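
For example, a hypothetical hadoop-site.xml sketch along those lines -- the
host name and ports here are placeholders of my own, not the poster's real
values:

```xml
<!-- Hypothetical sketch: master-hadoop.local and ports 9000/9001 are
     placeholders; substitute your own host name and ports. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master-hadoop.local:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>master-hadoop.local:9001</value>
  </property>
</configuration>
```

With host names in the config, fixing an address later means touching only
/etc/hosts (or DNS), not every Hadoop config file on every node.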

> slaves.xml:

Same here. Use host names instead and make sure they resolve correctly
in both directions. For example, "nslookup master-hadoop.local" should
return you "", and "nslookup" should give you
"master-hadoop.local". The same goes for the slave.
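
This check can be scripted. A sketch, with one caveat: `getent` goes through
the system resolver, so it honours /etc/hosts, whereas plain `nslookup`
queries DNS directly. `localhost` below is a stand-in; on the cluster you
would check master-hadoop.local, slave1-hadoop.local, and so on:

```shell
#!/bin/sh
# HOST is a stand-in -- replace with the real cluster host name.
HOST=localhost

# Forward lookup: name -> IP (first matching entry)
IP=$(getent hosts "$HOST" | awk 'NR==1 {print $1}')
echo "forward: $HOST -> $IP"

# Reverse lookup: IP -> name; should give a host name back
NAME=$(getent hosts "$IP" | awk 'NR==1 {print $2}')
echo "reverse: $IP -> $NAME"
```

If either variable comes back empty, or the reverse lookup returns a
different name than you started with, that machine's resolution is broken.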

> /etc/hosts on
>    master-hadoop    localhost.localdomain    localhost
> slave1-hadoop
> /etc/hosts on
>    slave1-hadoop    localhost.localdomain    localhost
> master-hadoop
> How should I go about fixing this?

Well, they are wrong anyway. For a start, remove "*-hadoop" from the
localhost line on both machines and RTFM about /etc/hosts here:
http://www.geo.arizona.edu/tools/man-cgi?hosts+5
I would suggest going with something like this:

On the master it should be:
--------------------------------------	localhost master-hadoop.local master-hadoop

On the slave it should be:
--------------------------------------	localhost slave1-hadoop.local slave1-hadoop

Are there any more slaves? They have to be configured in the same way.

P.S. /etc/hosts is difficult to keep in sync by hand across machines;
you would be better off switching to local DNS instead.
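
One lightweight way to do that -- my own suggestion, not something from this
thread -- is dnsmasq, which by default serves the contents of the local
/etc/hosts file over DNS. A minimal, hypothetical /etc/dnsmasq.conf sketch
for the master:

```
# Hypothetical dnsmasq.conf sketch: serve this machine's /etc/hosts
# to the rest of the cluster over DNS.
domain-needed    # never forward plain names upstream
bogus-priv       # never forward reverse lookups for private ranges
expand-hosts     # append the domain below to bare names from /etc/hosts
domain=local     # so "master-hadoop" also answers as master-hadoop.local
```

The slaves would then point resolv.conf at the master, and /etc/hosts only
has to be maintained in one place.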
Kind regards, BM

Things that are stupid at the beginning rarely end up wise.
