Subject: Re: hadoop cares about /etc/hosts ?
From: Shahab Yunus <shahab.yunus@gmail.com>
To: user@hadoop.apache.org
Date: Mon, 9 Sep 2013 10:26:45 -0400

I think he means the 'masters' file, found only on the master node(s) at
conf/masters. Details here:
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/#masters-vs-slaves

Regards,
Shahab

On Mon, Sep 9, 2013 at 10:22 AM, Jay Vyas wrote:
> Jitendra: When you say "check your masters file content", what are you
> referring to?
>
> On Mon, Sep 9, 2013 at 8:31 AM, Jitendra Yadav wrote:
>> Also, can you please check your masters file content in the hadoop conf
>> directory?
>>
>> Regards,
>> Jitendra
>>
>> On Mon, Sep 9, 2013 at 5:11 PM, Olivier Renault wrote:
>>> Could you confirm that you put the hash in front of "192.168.6.10
>>> localhost"?
>>>
>>> It should look like:
>>>
>>> # 192.168.6.10    localhost
>>>
>>> Thanks,
>>> Olivier
>>>
>>> On 9 Sep 2013 12:31, "Cipher Chen" wrote:
>>>> Hi everyone,
>>>>   I have solved a configuration problem of my own making in hadoop
>>>> cluster mode.
>>>>
>>>> I have the configuration below:
>>>>
>>>>   <property>
>>>>     <name>fs.default.name</name>
>>>>     <value>hdfs://master:54310</value>
>>>>   </property>
>>>>
>>>> and the hosts file:
>>>>
>>>> /etc/hosts:
>>>> 127.0.0.1       localhost
>>>> 192.168.6.10    localhost  ###
>>>>
>>>> 192.168.6.10    tulip master
>>>> 192.168.6.5     violet slave
>>>>
>>>> When I tried to run start-dfs.sh, the namenode failed to start.
>>>>
>>>> The namenode log hinted that:
>>>> 13/09/09 17:09:02 INFO namenode.NameNode: Namenode up at: localhost/192.168.6.10:54310
>>>> ...
>>>> 13/09/09 17:09:10 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithF>
>>>> [the same line repeats once a second, through "Already tried 9 time(s)"]
>>>> 13/09/09 17:09:19 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithF>
>>>> ...
>>>>
>>>> Now I know that deleting the line "192.168.6.10    localhost  ###"
>>>> would fix this, but I still don't know why hadoop would resolve
>>>> "master" to "localhost/127.0.0.1".
>>>>
>>>> http://blog.devving.com/why-does-hbase-care-about-etchosts/ seems to
>>>> explain this, but I'm not quite sure. Is there any other explanation?
>>>>
>>>> Thanks.
>>>>
>>>> --
>>>> Cipher Chen
>>>
>>> CONFIDENTIALITY NOTICE
>>> NOTICE: This message is intended for the use of the individual or entity
>>> to which it is addressed and may contain information that is confidential,
>>> privileged and exempt from disclosure under applicable law. If the reader
>>> of this message is not the intended recipient, you are hereby notified that
>>> any printing, copying, dissemination, distribution, disclosure or
>>> forwarding of this communication is strictly prohibited. If you have
>>> received this communication in error, please contact the sender immediately
>>> and delete it from your system. Thank You.
>
> --
> Jay Vyas
> http://jayunit100.blogspot.com
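To see why the extra "192.168.6.10 localhost" line produces "localhost/127.0.0.1", here is a minimal sketch of the lookup behavior. This is not Hadoop's or Java's actual resolver; it only simulates the first-match scan that a glibc-style /etc/hosts files backend performs, which is the mechanism the linked blog post describes. Forward resolution of "master" is correct, but the reverse lookup of that IP canonicalizes it back to "localhost", and "localhost" then resolves to 127.0.0.1:

```python
# Sketch (assumption: first-match scan of /etc/hosts, as in glibc's
# "files" NSS backend). Shows the broken resolution chain from the thread.

HOSTS = """\
127.0.0.1      localhost
192.168.6.10   localhost
192.168.6.10   tulip master
192.168.6.5    violet slave
"""

def parse(hosts_text):
    """Turn hosts-file text into an ordered list of (ip, [names])."""
    table = []
    for line in hosts_text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blanks
        if not line:
            continue
        ip, *names = line.split()
        table.append((ip, names))
    return table

def forward(table, name):
    """Name -> IP: the first line mentioning the name wins."""
    for ip, names in table:
        if name in names:
            return ip
    return None

def reverse(table, ip):
    """IP -> canonical name: first line with that IP wins, first name wins."""
    for entry_ip, names in table:
        if entry_ip == ip:
            return names[0]
    return None

table = parse(HOSTS)
ip = forward(table, "master")          # 192.168.6.10 -- looks correct
canonical = reverse(table, ip)         # "localhost"  -- the bad line wins
client_ip = forward(table, canonical)  # 127.0.0.1    -- clients dial loopback
print(ip, canonical, client_ip)
```

This matches the log: the namenode reports itself as "localhost/192.168.6.10" (reverse lookup gave "localhost"), while anything re-resolving "localhost" connects to 127.0.0.1 and retries forever. Commenting out the bad line, as Olivier suggests, makes the reverse lookup return "tulip" instead.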
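For reference, a corrected /etc/hosts along the lines Olivier suggests (a sketch using the hostnames from the thread; adjust to your own network) would be:

```
# /etc/hosts -- corrected: each IP maps to its real hostname only
127.0.0.1       localhost
192.168.6.10    tulip master
192.168.6.5     violet slave
```

With this layout, the reverse lookup of 192.168.6.10 yields "tulip" rather than "localhost", so the namenode binds and advertises an address that other nodes can actually reach.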