Subject: Re: Starting JobTracker Locally but binding to remote Address
From: "Juan P." <gordoslocos@gmail.com>
To: common-user@hadoop.apache.org
Date: Wed, 1 Jun 2011 12:55:03 -0300

Joey,

I just tried it and it worked great. I configured the entire cluster (added
a couple more DataNodes) and was able to run a simple map/reduce job.

Thanks for your help!
Pony

On Tue, May 31, 2011 at 6:26 PM, gordoslocos <gordoslocos@gmail.com> wrote:

> :D I'll give that a try 1st thing in the morning! Thanks a lot Joey!!
>
> Sent from my iPhone
>
> On 31/05/2011, at 18:18, Joey Echeverria wrote:
>
> > The problem is that start-all.sh isn't all that intelligent. It works
> > by running start-dfs.sh and start-mapred.sh. The start-mapred.sh script
> > always starts a JobTracker on the local host and a TaskTracker on each
> > of the hosts listed in the slaves file (it uses SSH to do the remote
> > execution). The start-dfs.sh script always starts a NameNode on the
> > local host, a DataNode on each of the hosts listed in slaves, and a
> > secondary NameNode on each of the hosts listed in masters.
> >
> > In your case, you'll want to run start-dfs.sh on slave3 and
> > start-mapred.sh on slave2.
> >
> > -Joey
> >
> > On Tue, May 31, 2011 at 5:07 PM, Juan P.
> > wrote:
> >
> >> Hi Guys,
> >> I recently configured my cluster to have 2 VMs. I configured one
> >> machine (slave3) to be the NameNode and the other (slave2) to be the
> >> JobTracker. They both work as DataNode/TaskTracker as well.
> >>
> >> Both configs have the following contents in their masters and slaves
> >> files:
> >>
> >> slave2
> >> slave3
> >>
> >> Both machines have the following contents in their mapred-site.xml
> >> file:
> >>
> >> <?xml version="1.0"?>
> >> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> >> <configuration>
> >>   <property>
> >>     <name>mapred.job.tracker</name>
> >>     <value>slave2:9001</value>
> >>   </property>
> >> </configuration>
> >>
> >> Both machines have the following contents in their core-site.xml file:
> >>
> >> <?xml version="1.0"?>
> >> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> >> <configuration>
> >>   <property>
> >>     <name>fs.default.name</name>
> >>     <value>hdfs://slave3:9000</value>
> >>   </property>
> >> </configuration>
> >>
> >> When I log into the NameNode and run the start-all.sh script,
> >> everything but the JobTracker starts. In the log files I get the
> >> following exception:
> >>
> >> /*************************************************************
> >> STARTUP_MSG: Starting JobTracker
> >> STARTUP_MSG:   host = slave3/10.20.11.112
> >> STARTUP_MSG:   args = []
> >> STARTUP_MSG:   version = 0.20.2
> >> STARTUP_MSG:   build =
> >> https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
> >> 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
> >> *************************************************************/
> >> 2011-05-31 13:54:06,940 INFO org.apache.hadoop.mapred.JobTracker:
> >> Scheduler configured with (memSizeForMapSlotOnJT,
> >> memSizeForReduceSlotOnJT, limitMaxMemForMapTasks,
> >> limitMaxMemForReduceTasks) (-1, -1, -1, -1)
> >> 2011-05-31 13:54:07,086 FATAL org.apache.hadoop.mapred.JobTracker:
> >> java.net.BindException: Problem binding to slave2/10.20.11.166:9001 :
> >> Cannot assign requested address
> >>     at org.apache.hadoop.ipc.Server.bind(Server.java:190)
> >>     at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:253)
> >>     at org.apache.hadoop.ipc.Server.<init>(Server.java:1026)
> >>     at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:488)
> >>     at org.apache.hadoop.ipc.RPC.getServer(RPC.java:450)
> >>     at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:1595)
> >>     at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:183)
> >>     at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:175)
> >>     at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:3702)
> >> Caused by: java.net.BindException: Cannot assign requested address
> >>     at sun.nio.ch.Net.bind(Native Method)
> >>     at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
> >>     at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
> >>     at org.apache.hadoop.ipc.Server.bind(Server.java:188)
> >>     ... 8 more
> >> 2011-05-31 13:54:07,096 INFO org.apache.hadoop.mapred.JobTracker:
> >> SHUTDOWN_MSG:
> >> /*************************************************************
> >> SHUTDOWN_MSG: Shutting down JobTracker at slave3/10.20.11.112
> >> *************************************************************/
> >>
> >> As I see it, from the lines
> >>
> >> STARTUP_MSG: Starting JobTracker
> >> STARTUP_MSG:   host = slave3/10.20.11.112
> >>
> >> the NameNode (slave3) is trying to run the JobTracker locally, but
> >> when it starts the JobTracker server it binds it to the slave2 address
> >> and of course fails:
> >>
> >> Problem binding to slave2/10.20.11.166:9001
> >>
> >> What do you guys think could be going wrong?
> >>
> >> Thanks!
> >> Pony
> >
> >
> > --
> > Joseph Echeverria
> > Cloudera, Inc.
> > 443.305.9434
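For anyone hitting the same "Cannot assign requested address" error: a process can only bind a listening socket to an IP address that is assigned to one of its own network interfaces, which is why a JobTracker started on slave3 cannot bind slave2's address 10.20.11.166:9001. A minimal sketch of that failure mode on Linux, in Python rather than Hadoop's Java (203.0.113.1 is a documentation-range address assumed not to be configured on the local machine, standing in for slave2's IP):

```python
import errno
import socket

def try_bind(host, port):
    """Try to bind a listening socket to (host, port), roughly what the
    JobTracker's IPC Server.bind() does. Return None on success, or the
    errno of the failure."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return None
    except OSError as e:
        return e.errno
    finally:
        s.close()

# Port 0 lets the OS pick any free port, so only the address matters here.
# An address owned by this host binds fine:
print(try_bind("127.0.0.1", 0))  # None

# An address owned by some other machine fails the same way the JobTracker
# log shows -- EADDRNOTAVAIL, "Cannot assign requested address":
print(try_bind("203.0.113.1", 0) == errno.EADDRNOTAVAIL)
```

This is why Joey's fix works: running start-mapred.sh on slave2 itself means the JobTracker binds the address named in mapred.job.tracker as a local address.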