From: Harsh J
Date: Tue, 19 Feb 2013 21:55:11 +0530
Subject: Re: Namenode formatting problem
To: user@hadoop.apache.org

To simplify my previous post: the IPs for the master/slave/etc. in your
/etc/hosts file should always match the ones reported by "ifconfig". In
proper deployments the IP is static. If the IP is dynamic, we'll need to
think of some different approaches.

On Tue, Feb 19, 2013 at 9:53 PM, Harsh J wrote:
> Hey Keith,
>
> I'm guessing that whatever "ip-13-0-177-110" resolves to (ping it to
> check) is not the local IP on that machine (or rather, it isn't the
> machine you intended to start it on)?
>
> I'm not sure if EC2 grants static IPs, but otherwise a change in the
> assigned IP (checkable via ifconfig) would probably explain the
> "Cannot assign" error received when we tried a bind() syscall.
>
> On Tue, Feb 19, 2013 at 4:30 AM, Keith Wiley wrote:
>> This is Hadoop 2.0.
>> Formatting the namenode produces no errors in the shell, but the log shows this:
>>
>> 2013-02-18 22:19:46,961 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
>> java.net.BindException: Problem binding to [ip-13-0-177-110:9212] java.net.BindException: Cannot assign requested address; For more details see: http://wiki.apache.org/hadoop/BindException
>>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:710)
>>         at org.apache.hadoop.ipc.Server.bind(Server.java:356)
>>         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:454)
>>         at org.apache.hadoop.ipc.Server.<init>(Server.java:1833)
>>         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:866)
>>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:375)
>>         at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:350)
>>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:695)
>>         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:684)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:238)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:452)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:434)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:608)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:589)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1140)
>>         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1204)
>> 2013-02-18 22:19:46,988 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
>> 2013-02-18 22:19:46,990 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
>> /************************************************************
>> SHUTDOWN_MSG: Shutting down NameNode at ip-13-0-177-11/127.0.0.1
>> ************************************************************/
>>
>> No java processes begin (although I
>> wouldn't expect formatting the namenode to start any processes; only
>> starting the namenode or datanode should do that), and "hadoop fs -ls /"
>> gives me this:
>>
>> ls: Call From [CLIENT_HOST]/127.0.0.1 to [MASTER_HOST]:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
>>
>> My /etc/hosts looks like this:
>> 127.0.0.1   localhost localhost.localdomain CLIENT_HOST
>> MASTER_IP   MASTER_HOST master
>> SLAVE_IP    SLAVE_HOST slave01
>>
>> This is on EC2. All of the nodes are in the same security group and the
>> security group has full inbound access. I can ssh between all three
>> machines (client/master/slave) without a password via authorized_keys.
>> I can ping the master node from the client machine (although I don't
>> know how to ping a specific port, such as the HDFS port, 9000). Telnet
>> doesn't behave on EC2, which makes port testing a little difficult.
>>
>> Any ideas?
>>
>> ________________________________________________________________________________
>> Keith Wiley     kwiley@keithwiley.com     keithwiley.com    music.keithwiley.com
>>
>> "The easy confidence with which I know another man's religion is folly teaches
>> me to suspect that my own is also."
>>                                            --  Mark Twain
>> ________________________________________________________________________________
>>
>
> --
> Harsh J

--
Harsh J
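
P.S. The /etc/hosts-vs-ifconfig check suggested above can be scripted.
This is only a sketch, not anything from the thread: the helper names
local_ip and resolves_locally are made up, and 192.0.2.1 is just a
placeholder address from the TEST-NET-1 range (no packets are sent to it).

```python
import socket

def local_ip():
    """Return the IP of the interface used for outbound traffic.
    connect() on a UDP socket sends nothing; it only selects a route,
    so getsockname() reveals the local address that would be used."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("192.0.2.1", 9))  # TEST-NET-1 placeholder address
        return s.getsockname()[0]
    finally:
        s.close()

def resolves_locally(hostname):
    """True when `hostname` resolves to loopback or to the IP that
    ifconfig would report for the outbound interface."""
    addr = socket.gethostbyname(hostname)
    return addr == "127.0.0.1" or addr == local_ip()

# On a correctly configured master, the hostname you put in /etc/hosts
# for that machine should pass this check:
print(resolves_locally("localhost"))  # True
```

If the cluster hostname (e.g. ip-13-0-177-110) fails this check, the
NameNode cannot bind to the address it resolves to.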
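
P.P.S. The "Cannot assign requested address" failure itself is easy to
reproduce outside Hadoop: it is what bind() reports when the target
address is not assigned to any local interface. A minimal sketch (the
helper name try_bind is made up; port 9212 comes from the log above, and
203.0.113.1 is a TEST-NET address that is never local):

```python
import socket

def try_bind(addr, port=9212):
    """Attempt the same bind() the NameNode RPC server performs at
    startup; return "OK" on success or the OS error string."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((addr, port))
        return "OK"
    except OSError as e:
        return f"FAIL: {e.strerror}"
    finally:
        s.close()

print(try_bind("127.0.0.1"))    # loopback is always local
print(try_bind("203.0.113.1"))  # non-local address: the NameNode's error
```

The second call fails the same way the NameNode does when /etc/hosts
points its hostname at an address the machine no longer owns.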