Subject: Re: hadoop cares about /etc/hosts ?
From: Jay Vyas <jayunit100@gmail.com>
To: "common-user@hadoop.apache.org"
Date: Mon, 9 Sep 2013 10:22:10 -0400

Jitendra: When you say "check your masters file content", what are you referring to?

On Mon, Sep 9, 2013 at 8:31 AM, Jitendra Yadav <jeetuyadav200890@gmail.com> wrote:
> Also, can you please check your masters file content in the hadoop conf
> directory?
>
> Regards,
> Jitendra
>
> On Mon, Sep 9, 2013 at 5:11 PM, Olivier Renault <orenault@hortonworks.com> wrote:
>> Could you confirm that you put the hash in front of "192.168.6.10    localhost"?
>>
>> It should look like:
>>
>> # 192.168.6.10    localhost
>>
>> Thanks,
>> Olivier
>>
>> On 9 Sep 2013 12:31, "Cipher Chen" <cipher.chen2012@gmail.com> wrote:
>>> Hi everyone,
>>>   I have solved a configuration problem of my own making in hadoop cluster mode.
>>>
>>> I have the configuration as below:
>>>
>>>   <property>
>>>     <name>fs.default.name</name>
>>>     <value>hdfs://master:54310</value>
>>>   </property>
>>>
>>> and the hosts file:
>>>
>>> /etc/hosts:
>>> 127.0.0.1       localhost
>>> 192.168.6.10    localhost  ###
>>>
>>> 192.168.6.10    tulip master
>>> 192.168.6.5     violet slave
>>>
>>> and when I was trying to start-dfs.sh, the namenode failed to start.
>>>
>>> The namenode log hinted that:
>>> 13/09/09 17:09:02 INFO namenode.NameNode: Namenode up at: localhost/192.168.6.10:54310
>>> ...
>>> 13/09/09 17:09:10 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithF>
>>> 13/09/09 17:09:11 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithF>
>>> ...
>>> 13/09/09 17:09:19 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithF>
>>> ...
>>>
>>> Now I know deleting the line "192.168.6.10    localhost  ###" would fix this.
>>> But I still don't know why hadoop would resolve "master" to "localhost/127.0.0.1".
>>>
>>> Seems http://blog.devving.com/why-does-hbase-care-about-etchosts/ explains this,
>>> but I'm not quite sure.
>>> Is there any other explanation for this?
>>>
>>> Thanks.
>>>
>>> --
>>> Cipher Chen
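The lookup chain is easy to reproduce outside Hadoop. Below is a minimal sketch using only java.net.InetAddress (the class name HostsCheck is just for illustration); it assumes the /etc/hosts quoted above, and exactly what it prints depends on the system resolver, but it shows the forward and reverse lookups behind "localhost/192.168.6.10" in the NameNode log and "localhost/127.0.0.1" on the clients:

import java.net.InetAddress;

public class HostsCheck {
    public static void main(String[] args) throws Exception {
        // 1. Forward lookup of the fs.default.name host.
        InetAddress master = InetAddress.getByName("master");
        System.out.println("master -> " + master.getHostAddress());

        // 2. Reverse lookup of that address. With "192.168.6.10  localhost"
        //    listed before "192.168.6.10  tulip master", this typically comes
        //    back as "localhost", the name the NameNode then advertises.
        InetAddress byIp = InetAddress.getByName(master.getHostAddress());
        System.out.println(master.getHostAddress() + " -> " + byIp.getCanonicalHostName());

        // 3. Forward lookup of "localhost" gives 127.0.0.1, which is why the
        //    clients end up retrying localhost/127.0.0.1:54310.
        System.out.println("localhost -> " + InetAddress.getByName("localhost").getHostAddress());
    }
}

With the stray "192.168.6.10    localhost" mapping removed (or commented out), step 2 should return "tulip" instead, and the advertised address no longer collapses to localhost.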

--
Jay Vyas
http://jayunit100.blogspot.com
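
For reference, the full hosts file that Olivier's suggestion points at (the stray localhost mapping commented out, everything else as quoted above) would presumably be:

/etc/hosts:
127.0.0.1       localhost
# 192.168.6.10  localhost
192.168.6.10    tulip master
192.168.6.5     violet slave

With that in place, the reverse lookup of 192.168.6.10 can only hit the "tulip master" line, so the NameNode no longer advertises itself as localhost.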