hadoop-mapreduce-user mailing list archives

From Olivier Renault <orena...@hortonworks.com>
Subject Re: hadoop cares about /etc/hosts ?
Date Mon, 09 Sep 2013 11:41:37 GMT
Could you confirm that you put the hash in front of the "192.168.6.10    localhost" line?

It should look like this:

# 192.168.6.10    localhost
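
For reference, the whole file would then look something like this (the
entries are taken from your mail, the exact layout is just a sketch):

127.0.0.1       localhost
# 192.168.6.10    localhost
192.168.6.10    tulip master
192.168.6.5     violet slave

You can also double check what the resolver returns afterwards with
"getent hosts master" and "getent hosts 192.168.6.10".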

Thanks
Olivier
On 9 Sep 2013 12:31, "Cipher Chen" <cipher.chen2012@gmail.com> wrote:

> Hi everyone,
>   I have solved a configuration problem of my own making in Hadoop cluster
> mode.
>
> I have the following configuration:
>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://master:54310</value>
>   </property>
>
> and the hosts file:
>
>
> /etc/hosts:
> 127.0.0.1       localhost
> 192.168.6.10    localhost  ###
> 192.168.6.10    tulip master
> 192.168.6.5     violet slave
>
> and when I was trying to run start-dfs.sh, the namenode failed to start.
>
>
> The namenode log hinted at the cause:
> 13/09/09 17:09:02 INFO namenode.NameNode: Namenode up at: localhost/
> 192.168.6.10:54310
> ...
> 13/09/09 17:09:10 INFO ipc.Client: Retrying connect to server: localhost/
> 127.0.0.1:54310. Already tried 0 time(s); retry policy is
> RetryUpToMaximumCountWithF>
> 13/09/09 17:09:11 INFO ipc.Client: Retrying connect to server: localhost/
> 127.0.0.1:54310. Already tried 1 time(s); retry policy is
> RetryUpToMaximumCountWithF>
> 13/09/09 17:09:12 INFO ipc.Client: Retrying connect to server: localhost/
> 127.0.0.1:54310. Already tried 2 time(s); retry policy is
> RetryUpToMaximumCountWithF>
> 13/09/09 17:09:13 INFO ipc.Client: Retrying connect to server: localhost/
> 127.0.0.1:54310. Already tried 3 time(s); retry policy is
> RetryUpToMaximumCountWithF>
> 13/09/09 17:09:14 INFO ipc.Client: Retrying connect to server: localhost/
> 127.0.0.1:54310. Already tried 4 time(s); retry policy is
> RetryUpToMaximumCountWithF>
> 13/09/09 17:09:15 INFO ipc.Client: Retrying connect to server: localhost/
> 127.0.0.1:54310. Already tried 5 time(s); retry policy is
> RetryUpToMaximumCountWithF>
> 13/09/09 17:09:16 INFO ipc.Client: Retrying connect to server: localhost/
> 127.0.0.1:54310. Already tried 6 time(s); retry policy is
> RetryUpToMaximumCountWithF>
> 13/09/09 17:09:17 INFO ipc.Client: Retrying connect to server: localhost/
> 127.0.0.1:54310. Already tried 7 time(s); retry policy is
> RetryUpToMaximumCountWithF>
> 13/09/09 17:09:18 INFO ipc.Client: Retrying connect to server: localhost/
> 127.0.0.1:54310. Already tried 8 time(s); retry policy is
> RetryUpToMaximumCountWithF>
> 13/09/09 17:09:19 INFO ipc.Client: Retrying connect to server: localhost/
> 127.0.0.1:54310. Already tried 9 time(s); retry policy is
> RetryUpToMaximumCountWithF>
> ...
>
> Now I know deleting the line "192.168.6.10    localhost  ###"
> would fix this.
> But I still don't know why hadoop would resolve "master" to
> "localhost/127.0.0.1".
>
> It seems http://blog.devving.com/why-does-hbase-care-about-etchosts/
> explains this, but I'm not quite sure.
> Is there any other explanation for this?
>
> Thanks.
>
>
> --
> Cipher Chen
>
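
To illustrate where the "localhost/127.0.0.1" in the log above comes from,
here is a minimal sketch (not from the thread; it only assumes the /etc/hosts
quoted above and the standard JVM/OS resolver, which Hadoop's IPC layer
ultimately relies on):

import java.net.InetAddress;

// Sketch of the lookups behind "Namenode up at: localhost/192.168.6.10".
// Assumes the /etc/hosts quoted above; Hadoop's own code paths may differ,
// but they go through the same resolver.
public class HostsLookupSketch {
    public static void main(String[] args) throws Exception {
        // Forward lookup: "master" -> 192.168.6.10 via /etc/hosts.
        InetAddress master = InetAddress.getByName("master");
        System.out.println("master    -> " + master.getHostAddress());

        // Reverse lookup of that address typically returns the first name
        // listed for it. With "192.168.6.10    localhost" above the
        // tulip/master line, this prints "localhost" rather than "tulip".
        System.out.println("reverse   -> " + master.getCanonicalHostName());

        // That name is then resolved again, and "localhost" maps to
        // 127.0.0.1 via the first hosts entry - hence the retries against
        // localhost/127.0.0.1:54310 in the IPC log.
        InetAddress lo = InetAddress.getByName("localhost");
        System.out.println("localhost -> " + lo.getHostAddress());
    }
}

Commenting out (or deleting) the extra localhost entry lets the reverse
lookup return the real hostname again, which is presumably why the fix
suggested above works.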

