Subject: Re: Problem accessing HDFS from a remote machine
From: Rishi Yadav
To: user@hadoop.apache.org
Date: Mon, 8 Apr 2013 21:40:50 -0700

Have you checked the firewall on the namenode?

If you are running Ubuntu and the namenode port is 8020, the command is:

-> ufw allow 8020
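
In this thread the configured RPC port is actually 54310, so as a rough sketch (assuming Ubuntu's ufw, and netcat installed on the remote machine) you could open the port on the namenode and then probe it from the client:

    # on the namenode: allow the HDFS RPC port used in this thread
    $ sudo ufw allow 54310

    # from the remote machine: is the port reachable at all?
    $ nc -zv 10.209.10.206 54310

If nc still reports "Connection refused" with the firewall open, the namenode is most likely not listening on a reachable interface (see Bjorn's reply below).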

Thanks and Regards,

Rishi= Yadav

InfoObjects Inc ||=A0http://www.infoobjects.com(Big Data Solutions)



On Mon, Apr 8, 2013 at 6:57 PM, Azuryy Yu <azuryyyu@gmail.com> wrote:

Can you use the command "jps" on your localhost to see if there is a NameNode process running?
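
For reference, on a healthy single-node Hadoop 1.x install the output would typically look something like the following (the PIDs are illustrative):

    $ jps
    4528 NameNode
    4712 DataNode
    4893 SecondaryNameNode
    5021 JobTracker
    5210 TaskTracker
    6677 Jps

If NameNode is absent from the list, the namenode log under $HADOOP_HOME/logs should say why it failed to start.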



On Tue, Apr 9, 2013 at 2:27 AM, Bjorn Jonsson <bjornjon@gmail.com> wrote:

Yes, the namenode port is not open for your cluster. I had this problem too. First, log into your namenode and do netstat -nap to see what ports are listening. You can do service --status-all to see if the namenode service is running. Basically you need Hadoop to bind to the correct IP (an external one, or at least reachable from your remote machine). So listening on 127.0.0.1 or localhost or some IP for a private network will not be sufficient. Check your /etc/hosts file and /etc/hadoop/conf/*-site.xml files to configure the correct IP/ports.
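
As a concrete sketch of that check (the port assumes the 54310 setting from the original post, and the output line illustrates the failure mode rather than being captured from this cluster):

    # on the namenode: which address is the HDFS RPC port bound to?
    $ sudo netstat -nap | grep 54310
    tcp   0   0 127.0.0.1:54310   0.0.0.0:*   LISTEN   4528/java

A local address of 127.0.0.1:54310 means only local clients can connect; for remote access it needs to be a reachable IP of the machine (or 0.0.0.0).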

I'm no expert, so my understanding might be limited/wrong... but I hope this helps :)

Best,
B


On Mon, Apr 8, 2013 at 7:29 AM, Saurabh Jain= <Saurabh_Jain@symantec.com> wrote:

Hi All,

=A0

I have setup a single node cluster(release hadoop-1.= 0.4). Following is the configuration used =96

=A0

core-site.xml :-<= u>

=A0

<prop= erty>

=A0=A0 =A0=A0<name><= a href=3D"http://fs.default.name" target=3D"_blank">fs.default.name<= /name>

=A0=A0 =A0=A0<value>hdfs://localhost:54310<= /value>

</property>

=A0

masters:-

localhost

=A0

slaves:-

=

localhost

=A0

I am able to successfully format the Namenode and pe= rform files system operations by running the CLIs on Namenode.

=A0

But= I am receiving following error when I try to access HDFS from a remote = machine =96

=A0

$ bin/ha= doop fs -ls /

Warning: $HADOOP_HOME= is deprecated.

=A0

13/04/08 07:13:56 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10= .209.10.206:54310. Already tried 0 time(s).

13/04/08 07:13:57 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10= .209.10.206:54310. Already tried 1 time(s).

13/04/08 07:13:58 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10= .209.10.206:54310. Already tried 2 time(s).

13/04/08 07:13:59 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10= .209.10.206:54310. Already tried 3 time(s).

13/04/08 07:14:00 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10= .209.10.206:54310. Already tried 4 time(s).

13/04/08 07:14:01 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10= .209.10.206:54310. Already tried 5 time(s).

13/04/08 07:14:02 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10= .209.10.206:54310. Already tried 6 time(s).

13/04/08 07:14:03 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10= .209.10.206:54310. Already tried 7 time(s).

13/04/08 07:14:04 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10= .209.10.206:54310. Already tried 8 time(s).

13/04/08 07:14:05 INFO ipc.Client: Retrying connect to server: 10.209.10.206/10= .209.10.206:54310. Already tried 9 time(s).

Bad connection to FS. command aborted. exception: Call to 10.209.10.206/10.209.= 10.206:54310 failed on connection exception: java.net.ConnectException:= Connection refused

=A0

Where 10= .209.10.206 is the IP of the server hosting the Namenode and it=A0 is also = the configured value for =93fs.default.name=94 in the core-site.xml file on the remote machin= e.
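
Following Bjorn's advice above, the likely root cause is that fs.default.name on the namenode itself is hdfs://localhost:54310, so the RPC server binds to 127.0.0.1 and refuses remote connections. A sketch of the fix in the namenode's own core-site.xml (the IP is the one from this thread; restart the daemons after changing it):

    <property>
         <name>fs.default.name</name>
         <value>hdfs://10.209.10.206:54310</value>
    </property>

Also check that /etc/hosts does not map the machine's hostname to 127.0.0.1, or the namenode may still end up bound to loopback.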

=A0

Executin= g =91bin/hadoop fs -fs hdfs://10.209.10.206:54310 -ls /=92 also result in same out= put.

=A0

Also, I = am writing a C application using libhdfs to communicate with HDFS. How do w= e provide credentials while connecting to HDFS?
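
On the credentials question: with Hadoop 1.x simple authentication (no Kerberos), the "credential" is effectively just a username asserted by the client. A minimal sketch, assuming the three-argument hdfsConnectAsUser variant in the Hadoop 1.x hdfs.h (the signature has changed across versions, so check your header; the username here is purely illustrative):

    #include <stdio.h>
    #include "hdfs.h"   /* libhdfs header shipped with Hadoop */

    int main(void)
    {
        /* Connect to the namenode as a specific user; under simple auth
         * the server trusts the name supplied here. */
        hdfsFS fs = hdfsConnectAsUser("10.209.10.206", 54310, "saurabh");
        if (fs == NULL) {
            fprintf(stderr, "failed to connect to HDFS\n");
            return 1;
        }

        /* Quick sanity check: list the root directory. */
        int n = 0;
        hdfsFileInfo *entries = hdfsListDirectory(fs, "/", &n);
        for (int i = 0; i < n; i++)
            printf("%s\n", entries[i].mName);
        if (entries != NULL)
            hdfsFreeFileInfo(entries, n);

        hdfsDisconnect(fs);
        return 0;
    }

Note that the remote-connection problem above has to be fixed first; libhdfs goes through the same RPC port, so it will see the same "Connection refused" until the namenode listens on a reachable address.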

=A0

Thanks

Saurabh

=A0

=A0




--047d7b339dbb65d0b504d9e62866--