Subject: Re: Maven Cloudera Configuration problem
From: Pavan Sudheendra <pavan0591@gmail.com>
To: user@hadoop.apache.org
Date: Tue, 13 Aug 2013 21:37:37 +0530

When I actually run the job on the multi-node cluster, the logs show that it uses the localhost configuration, which I don't want. I just have a pom.xml that lists all the dependencies: standard Hadoop, standard HBase, standard ZooKeeper, etc. Should I remove these dependencies? I want the cluster settings to apply in my MapReduce application. This is where I'm stuck.

On Tue, Aug 13, 2013 at 9:30 PM, Pavan Sudheendra wrote:
> Hi Shabab and Sandy,
> The thing is, we have a 6-node Cloudera cluster running. For
> development purposes, I was building a MapReduce application on a
> single-node Apache Hadoop distribution with Maven.
>
> To be frank, I don't know how to deploy this application on a
> multi-node Cloudera cluster. I am fairly well versed with a multi-node
> Apache Hadoop distribution. So, how can I go forward?
>
> Thanks for all the help :)
>
> On Tue, Aug 13, 2013 at 9:22 PM, wrote:
>> Hi Pavan,
>>
>> Configuration properties generally aren't included in the jar itself
>> unless you explicitly set them in your Java code. Rather, they're picked
>> up from the mapred-site.xml file located in the Hadoop configuration
>> directory on the host you're running your job from.
>>
>> Is there an issue you're coming up against when trying to run your job
>> on a cluster?
>>
>> -Sandy
>>
>> (iPhone typing)
>>
>> On Aug 13, 2013, at 4:19 AM, Pavan Sudheendra wrote:
>>
>>> Hi,
>>> I'm currently using Maven to build the jars necessary for my
>>> MapReduce program to run, and it works for a single-node cluster.
>>>
>>> For a multi-node cluster, how do I get my MapReduce program to
>>> pick up the cluster settings instead of the localhost settings?
>>> I don't know how to specify this when using Maven to build my jar.
>>>
>>> I'm using the CDH distribution, by the way.
>>> --
>>> Regards-
>>> Pavan
>
>
>
> --
> Regards-
> Pavan

--
Regards-
Pavan
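
Sandy's point above is the crux: a plain Configuration (or Job) built from
it reads core-site.xml, hdfs-site.xml and mapred-site.xml from the classpath
of the host that submits the job, so nothing cluster-specific has to be baked
into the Maven-built jar. A minimal driver sketch along those lines, using
the standard Tool/ToolRunner pattern (the class name MyJobDriver and the
argument handling are illustrative, not taken from the thread):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class MyJobDriver extends Configured implements Tool {

        @Override
        public int run(String[] args) throws Exception {
            // getConf() holds whatever the *-site.xml files on the submitting
            // host's classpath say, plus any -D or -conf overrides given on
            // the command line. No host names are hard-coded here, so the
            // cluster's settings are what the job actually runs with.
            Job job = Job.getInstance(getConf(), "my-mr-job");
            job.setJarByClass(MyJobDriver.class);
            // mapper/reducer/output classes would be set here; without them
            // this runs as a simple identity (pass-through) job
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            return job.waitForCompletion(true) ? 0 : 1;
        }

        public static void main(String[] args) throws Exception {
            System.exit(ToolRunner.run(new Configuration(), new MyJobDriver(), args));
        }
    }

Building the jar with mvn package and launching it from a cluster node (or a
gateway machine whose classpath includes the cluster's Hadoop configuration
directory, typically /etc/hadoop/conf on CDH) with something like
"hadoop jar my-job.jar MyJobDriver /input /output" is what makes the cluster
configuration take effect; running the same jar on a machine whose
configuration directory still points at localhost reproduces the behaviour
Pavan describes.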
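
As for the pom.xml question: the Hadoop/HBase/ZooKeeper dependencies
generally do not need to be removed (they are needed to compile the code),
and they are not what decides which cluster the job talks to. A quick way to
see which configuration a build is actually picking up is to print the
resolved values. A small diagnostic sketch, with the class name and the
choice of properties being illustrative only:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class ShowResolvedConfig {
        public static void main(String[] args) {
            // HBaseConfiguration.create() layers hbase-default.xml and any
            // hbase-site.xml found on the classpath over the usual Hadoop
            // configuration. With no hbase-site.xml on the classpath, the
            // ZooKeeper quorum falls back to localhost -- the symptom above.
            Configuration conf = HBaseConfiguration.create();
            System.out.println("fs.defaultFS           = " + conf.get("fs.defaultFS"));
            System.out.println("hbase.zookeeper.quorum = " + conf.get("hbase.zookeeper.quorum"));
        }
    }

If these print localhost values when run on a cluster machine, the classpath
is the thing to fix (for example by launching through the hadoop/hbase
scripts so the cluster's configuration directories are included), not the POM.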