From: Praveen Sripati <praveensripati@gmail.com>
To: hdfs-user@hadoop.apache.org
Date: Wed, 11 Jan 2012 22:18:59 +0530
Subject: Re: HDFS Federation Exception

Suresh,

Here is the JIRA - https://issues.apache.org/jira/browse/HDFS-2778

Regards,
Praveen

On Wed, Jan 11, 2012 at 9:28 PM, Suresh Srinivas wrote:
> Thanks for figuring that. Could you create an HDFS Jira for this issue?
>
> On Wednesday, January 11, 2012, Praveen Sripati wrote:
> > Hi,
> >
> > The documentation (1) suggested setting the `dfs.namenode.rpc-address.ns1` property to `hdfs://nn-host1:rpc-port` in the example. Changing the value to `nn-host1:rpc-port` (removing `hdfs://`) solved the problem. The document needs to be updated.
> >
> > (1) - http://hadoop.apache.org/common/docs/r0.23.0/hadoop-yarn/hadoop-yarn-site/Federation.html
> >
> > Praveen
> >
> > On Wed, Jan 11, 2012 at 3:40 PM, Praveen Sripati <praveensripati@gmail.com> wrote:
> >
> > Hi,
> >
> > Got the latest code to see if any bugs were fixed and tried federation with the same configuration, but got a similar exception.
> >
> > 2012-01-11 15:25:35,321 ERROR namenode.NameNode (NameNode.java:main(803)) - Exception in namenode join
> > java.io.IOException: Failed on local exception: java.net.SocketException: Unresolved address; Host Details : local host is: "hdfs"; destination host is: "(unknown):0;
> >         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:895)
> >         at org.apache.hadoop.ipc.Server.bind(Server.java:231)
> >         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:313)
> >         at org.apache.hadoop.ipc.Server.<init>(Server.java:1600)
> >         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:576)
> >         at org.apache.hadoop.ipc.WritableRpcEngine$Server.<init>(WritableRpcEngine.java:322)
> >         at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:282)
> >         at org.apache.hadoop.ipc.WritableRpcEngine.getServer(WritableRpcEngine.java:46)
> >         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:550)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:145)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:356)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:334)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:458)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:450)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:751)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:799)
> > Caused by: java.net.SocketException: Unresolved address
> >         at sun.nio.ch.Net.translateToSocketException(Net.java:58)
> >         at sun.nio.ch.Net.translateException(Net.java:84)
> >         at sun.nio.ch.Net.translateException(Net.java:90)
> >         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:61)
> >         at org.apache.hadoop.ipc.Server.bind(Server.java:229)
> >         ... 14 more
> > Caused by: java.nio.channels.UnresolvedAddressException
> >         at sun.nio.ch.Net.checkAddress(Net.java:30)
> >         at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:122)
> >         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
> >         ... 15 more
> >
> > Regards,
> > Praveen
> >
> > On Wed, Jan 11, 2012 at 12:24 PM, Praveen Sripati <praveensripati@gmail.com> wrote:
> >
> > Hi,
> >
> > I am trying to set up HDFS federation and getting the below error. Also pasted the core-site.xml and hdfs-site.xml at the bottom of the mail. Did I miss something in the configuration files?
> >
> > 2012-01-11 12:12:15,759 ERROR namenode.NameNode (NameNode.java:main(803)) - Exception in namenode join
> > java.lang.IllegalArgumentException: Can't parse port ''
> >         at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:198)
> >         at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:174)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:228)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:205)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.getRpcServerAddress(NameNode.java:266)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser(NameNode.java:317)
> >         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:329)
> >         at org.apache.hadoop.hdfs.server.namenode.N
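[Both stack traces above stem from the configured NameNode address failing to parse as a plain host:port pair. The following is an illustration only, not Hadoop's actual NetUtils code: a naive parser that splits on the first ':' breaks in the same way when the value carries a `hdfs://` URI scheme.]

```python
# Illustration only -- NOT Hadoop's NetUtils implementation.
# A naive host:port parser that splits on the first ':'.
def parse_host_port(addr):
    host, _, port = addr.partition(":")
    if not port.isdigit():
        # Analogous to Hadoop's "Can't parse port" failure above
        raise ValueError("Can't parse port '%s'" % port)
    return host, int(port)

# A plain host:port value parses cleanly.
print(parse_host_port("nn-host1:9000"))

# With a URI scheme, '//nn-host1:9000' lands in the port field and fails.
try:
    parse_host_port("hdfs://nn-host1:9000")
except ValueError as e:
    print(e)
```

This is why dropping the `hdfs://` prefix from `dfs.namenode.rpc-address.ns1` fixed the problem: the property expects a bare host:port value, not a URI.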
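[For reference, a minimal hdfs-site.xml sketch of a two-nameservice federation setup with the fix applied. The nameservice IDs, hostnames, and ports here are hypothetical, not taken from Praveen's actual configuration files, which were not included in this archived copy.]

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.federation.nameservices</name>
    <value>ns1,ns2</value>
  </property>
  <!-- Bare host:port values; no hdfs:// scheme, per the fix above. -->
  <property>
    <name>dfs.namenode.rpc-address.ns1</name>
    <value>nn-host1:9001</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns2</name>
    <value>nn-host2:9001</value>
  </property>
</configuration>
```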