Subject: Re: How running hadoop without command line?
From: Vladimir Klimontovich
Date: Thu, 27 Aug 2009 18:51:18 +0400
To: common-user@hadoop.apache.org
Cc: core-user@hadoop.apache.org

There is a class called JobClient. You can run jobs from a Java app using this class.

Also, "hadoop jar" adds $HADOOP_HOME/conf/hadoop-site.xml and $HADOOP_HOME/conf/hadoop-conf.xml to the classpath, so JobClient already knows which jobtracker and filesystem should be used for running the map process.

So when you run Hadoop jobs inside another application, you should take care of setting valid properties for the jobtracker address, the filesystem address, etc. You can either set them manually or use Configuration.addResource to add $HADOOP_HOME/conf/hadoop-conf.xml and $HADOOP_HOME/conf/hadoop-site.xml. (A minimal sketch follows after the quoted message below.)

On Aug 27, 2009, at 1:06 PM, radar.sxl wrote:

>
> When we run a hadoop project jar, we use
> $ bin/hadoop jar ***.jar
> but if I have a java app and I want to run a mapreduce job inside it using a hadoop
> cluster, is there any way?
> --
> View this message in context: http://www.nabble.com/How-running-hadoop-without-command-line--tp25167322p25167322.html
> Sent from the Hadoop core-user mailing list archive at Nabble.com.
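For reference, a minimal sketch of what this can look like with the old mapred API. The config file path, jobtracker/namenode addresses, and input/output paths below are placeholders (not from the original thread), and the identity mapper/reducer are used only to keep the example self-contained:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.IdentityMapper;
import org.apache.hadoop.mapred.lib.IdentityReducer;

public class EmbeddedJobRunner {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Option 1: load the cluster settings from the site file, as suggested
        // above (the path is an example; adjust it to your installation).
        conf.addResource(new Path("/opt/hadoop/conf/hadoop-site.xml"));

        // Option 2: set the jobtracker and default FS by hand
        // (host/port values here are placeholders).
        // conf.set("mapred.job.tracker", "jobtracker.example.com:9001");
        // conf.set("fs.default.name", "hdfs://namenode.example.com:9000");

        JobConf job = new JobConf(conf, EmbeddedJobRunner.class);
        job.setJobName("embedded-example");

        // Identity mapper/reducer just pass records through; in a real job
        // you would plug in your own classes here.
        job.setMapperClass(IdentityMapper.class);
        job.setReducerClass(IdentityReducer.class);
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.setInputPaths(job, new Path("/user/example/input"));
        FileOutputFormat.setOutputPath(job, new Path("/user/example/output"));

        // Submits the job to the configured jobtracker and blocks until it finishes.
        JobClient.runJob(job);
    }
}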
---
Vladimir Klimontovich
skype: klimontovich
GoogleTalk/Jabber: klimontovich@gmail.com
Cell phone: +7926 890 2349