From: Amogh Vasekar
To: common-user@hadoop.apache.org, core-user@hadoop.apache.org
Date: Wed, 16 Sep 2009 11:23:14 +0530
Subject: RE: about hadoop jvm allocation in job excution

Hi,

Funnily enough, I was looking at this just yesterday:
http://hadoop.apache.org/common/docs/r0.20.0/mapred_tutorial.html#Task+JVM+Reuse

Thanks,
Amogh

-----Original Message-----
From: Zhimin [mailto:wangzm@cs.umb.edu]
Sent: Tuesday, September 15, 2009 10:53 PM
To: core-user@hadoop.apache.org
Subject: about hadoop jvm allocation in job excution

We have a project that needs to support similarity queries against items drawn from a huge amount of data. One approach we have tried is to use HBase as the data repository and Hadoop as the query execution engine. We adopted Hadoop because Map-Reduce is a very good model for our underlying task and the programming was straightforward.

However, we found that Hadoop always allocates a new JVM for each individual task on a node. This is inefficient for us because, in our case, the whole Hadoop platform is dedicated to a set of relatively stable parametrized queries, and security and strict isolation of different tasks is not our main concern.

To save the task setup time, I wonder if there is an existing mechanism in Hadoop, or an extension of Hadoop in some other open source project, that would let us keep our classes resident in a JVM on the job node, with task nodes waiting for requests.
--
View this message in context: http://www.nabble.com/about-hadoop-jvm-allocation-in-job-excution-tp25458201p25458201.html
Sent from the Hadoop core-user mailing list archive at Nabble.com.
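The Task JVM Reuse feature linked in the reply above is controlled by the `mapred.job.reuse.jvm.num.tasks` property (available from Hadoop 0.19 onward). A minimal sketch of enabling it in `mapred-site.xml` or a per-job configuration; the value shown is illustrative:

```xml
<!-- Reuse task JVMs within a job instead of forking a fresh JVM per task. -->
<!-- A value of -1 lets a JVM run an unlimited number of tasks of the same
     job; a positive N caps reuse at N tasks per JVM (default is 1). -->
<property>
  <name>mapred.job.reuse.jvm.num.tasks</name>
  <value>-1</value>
</property>
```

Equivalently, the old `org.apache.hadoop.mapred` API exposes `JobConf.setNumTasksToExecutePerJvm(-1)`. Note that JVMs are reused only across tasks of the *same* job, so this amortizes per-task setup cost but does not by itself provide the persistent, cross-job resident service the original question asks about.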