From: Han JU <ju.han.felix@gmail.com>
To: user@hadoop.apache.org
Date: Thu, 25 Apr 2013 16:26:37 +0200
Subject: Re: Job launch from eclipse

Thanks Shashwat and Mohammad.

I'm exporting jars and running them with `hadoop jar`, but I think there should be a better way. I've tried a lot, but launching from Eclipse just doesn't work, and I don't really want to hard-code the jobtracker or HDFS information in my code. Maybe it's a bug in the Hadoop Eclipse plugin? I'm using the one from Hadoop 1.0.2 -- is there a newer version?

Thanks.

2013/4/23 Mohammad Tariq <dontariq@gmail.com>

> Hello Han,
>
>     The reason behind this is that the jobs are running inside Eclipse itself and not getting submitted to your cluster. Please see if this link helps:
> http://cloudfront.blogspot.in/2013/03/mapreduce-jobs-running-through-eclipse.html#.UXaQsDWH6IQ
>
> Warm Regards,
> Tariq
> https://mtariq.jux.com/
> cloudfront.blogspot.com
>
>
> On Tue, Apr 23, 2013 at 6:56 PM, shashwat shriparv <dwivedishashwat@gmail.com> wrote:
>
>> You need to generate a jar file, pass all the parameters at run time (if any are not fixed), and run it on Hadoop like: hadoop jar jarfilename.jar <parameters>
>>
>> Thanks & Regards
>>
>> Shashwat Shriparv
>>
>>
>> On Tue, Apr 23, 2013 at 6:51 PM, Han JU <ju.han.felix@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I'm getting my hands on Hadoop. One thing I really want to know is how you launch MR jobs in a development environment.
>>>
>>> I'm currently using Eclipse 3.7 with the Hadoop plugin from Hadoop 1.0.2. With this plugin I can manage HDFS and submit jobs to the cluster. But the strange thing is, every job launched from Eclipse this way is not recorded by the jobtracker (I can't monitor it from the web UI), yet the output finally appears in the HDFS path I gave as a parameter. It's strange enough to make me think it's a standalone job run that then writes its output to HDFS.
>>>
>>> So how do you code and launch jobs to the cluster?
>>>
>>> Many thanks.
>>>
>>> --
>>> JU Han
>>>
>>> UTC - Université de Technologie de Compiègne
>>> GI06 - Fouille de Données et Décisionnel
>>>
>>> +33 0619608888

--
JU Han

Software Engineer Intern @ KXEN Inc.

UTC - Université de Technologie de Compiègne
GI06 - Fouille de Données et Décisionnel

+33 0619608888
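[Editor's note] Mohammad's diagnosis matches Hadoop 1.x's default client behavior: unless the client configuration points at a real cluster, the job runs in-process via LocalJobRunner, which never contacts the jobtracker (so nothing appears in the web UI) but still writes output to whatever filesystem `fs.default.name` resolves to. A minimal sketch of the client-side configuration that makes an IDE launch submit to the cluster -- the hostnames and ports below are placeholders, not values from this thread:

```
<!-- core-site.xml on the client classpath; hostname/port are placeholders -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode-host:9000</value>
  </property>
</configuration>

<!-- mapred-site.xml on the client classpath; hostname/port are placeholders -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>jobtracker-host:9001</value>
  </property>
</configuration>
```

With these files on the launch classpath (or the same properties set on the job's `Configuration` before submission), a plugin-launched job should show up in the jobtracker UI; leaving `mapred.job.tracker` at its default value `local` is exactly what produces the standalone run Han describes.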