From: shashwat shriparv <dwivedishashwat@gmail.com>
Date: Sun, 28 Apr 2013 22:34:34 +0530
Subject: Re: M/R job to a cluster?
To: user@hadoop.apache.org

Check the JobTracker web UI at namenode:50030. If your job shows up there, it is
not running in local mode; if it does not, it is.

Thanks & Regards

∞
Shashwat Shriparv


On Sun, Apr 28, 2013 at 1:18 AM, sudhakara st <sudhakara.st@gmail.com> wrote:

> Hello Kevin,
>
> In this case:
>
> JobClient client = new JobClient();
> JobConf conf = new JobConf(WordCount.class);
>
> the job client picks up its configuration information by referring to
> HADOOP_HOME on the local system, so by default the job runs with the
> local settings.
>
> If your job configuration looks like this:
>
> Configuration conf = new Configuration();
> conf.set("fs.default.name", "hdfs://name_node:9000");
> conf.set("mapred.job.tracker", "job_tracker_node:9001");
>
> it picks up the configuration for the specified NameNode and JobTracker
> instead, and the job is submitted to that cluster rather than running
> locally.
>
> Regards,
> Sudhakara.st
>
>
> On Sat, Apr 27, 2013 at 2:52 AM, Kevin Burton <rkevinburton@charter.net> wrote:
>
>> It is hdfs://devubuntu05:9000. Is this wrong? Devubuntu05 is the name of
>> the host where the NameNode and JobTracker should be running. It is also
>> the host where I am running the M/R client code.
>>
>> On Apr 26, 2013, at 4:06 PM, Rishi Yadav <rishi@infoobjects.com> wrote:
>>
>> Check core-site.xml and see the value of fs.default.name. If it has
>> localhost, you are running locally.
>>
>> On Fri, Apr 26, 2013 at 1:59 PM, <rkevinburton@charter.net> wrote:
>>
>>> I suspect that my MapReduce job is being run locally. I don't have any
>>> evidence, but I am not sure how the specifics of my configuration are
>>> communicated to the Java code that I write. Based on the text that I have
>>> read online, basically I start with code like:
>>>
>>> JobClient client = new JobClient();
>>> JobConf conf = new JobConf(WordCount.class);
>>> . . . . .
>>>
>>> Where do I communicate the configuration information so that the M/R job
>>> runs on the cluster and not locally? Or is the configuration location
>>> "magically determined"?
>>>
>>> Thank you.
>
> --
> Regards,
> ..... Sudhakara.st
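
For completeness, a minimal driver sketch along the lines Sudhakara describes
above, using the old org.apache.hadoop.mapred API and Hadoop 1.x property names
that appear in this thread. The class name WordCountDriver and the hosts
name_node / job_tracker_node (with ports 9000/9001) are placeholders taken from
his snippet; substitute your own, e.g. devubuntu05. TokenCountMapper and
LongSumReducer are stock classes from org.apache.hadoop.mapred.lib, standing in
for whatever mapper and reducer you already have:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.LongSumReducer;
import org.apache.hadoop.mapred.lib.TokenCountMapper;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCountDriver.class);
        conf.setJobName("wordcount");

        // Point the job at the cluster instead of the LocalJobRunner.
        // If these are left unset and no *-site.xml is on the classpath,
        // mapred.job.tracker defaults to "local" and the job runs in-process.
        conf.set("fs.default.name", "hdfs://name_node:9000");     // placeholder host
        conf.set("mapred.job.tracker", "job_tracker_node:9001");  // placeholder host

        // Library classes that implement word count in the old API.
        conf.setMapperClass(TokenCountMapper.class);
        conf.setReducerClass(LongSumReducer.class);
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(LongWritable.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        // Submits to the JobTracker configured above; the job should then
        // appear in the web UI at http://job_tracker_node:50030
        JobClient.runJob(conf);
    }
}

If the job still runs locally with this in place, the usual culprit is a
*-site.xml on the client classpath that sets mapred.job.tracker back to
"local", since classpath resources are loaded before the explicit set() calls
are applied.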