From: Robert Evans
To: mapreduce-user@hadoop.apache.org
Date: Fri, 27 May 2011 07:54:00 -0700
Subject: Re: How are the Map and Reduce classes sent to a remote node by Hadoop?

Francesco,

The MapReduce client creates a jar called job.jar and places it in a staging directory in HDFS. This is the jar that you specified in your job conf; I believe that if none is specified it tries to guess the jar based on the Mapper and Reducer classes, but I am not sure of that. Once the JobTracker has told a TaskTracker to run a given job, the TaskTracker downloads the jar and then forks off a new JVM to execute the Mapper or Reducer. If your jar has dependencies, these usually have to be shipped with it via the distributed cache (archive) interface.

--Bobby Evans

On 5/27/11 9:16 AM, "Francesco De Luca" <f.deluca86@gmail.com> wrote:

Does anyone know the mechanism Hadoop uses to load the Map and Reduce classes on the remote node where the JobTracker submits the tasks?

In particular, how does Hadoop retrieve the .class files?

Thanks
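The "guess the jar based on the Mapper class" step Bobby mentions can be sketched in plain Java: ask the classloader where a class's `.class` file was loaded from, and if the URL points inside a jar, strip off the entry path to recover the jar's location. This is a minimal, hypothetical reconstruction for illustration (the class and method names below are my own, not Hadoop's actual code):

```java
import java.net.URL;

public class FindJar {
    // Returns the path of the jar that contains the given class,
    // or null if the class was loaded from a plain directory.
    static String findContainingJar(Class<?> clazz) {
        String path = clazz.getName().replace('.', '/') + ".class";
        ClassLoader loader = clazz.getClassLoader();
        URL url = (loader == null)
                ? ClassLoader.getSystemResource(path)
                : loader.getResource(path);
        if (url == null || !"jar".equals(url.getProtocol())) {
            return null; // not loaded from a jar
        }
        // A jar URL looks like jar:file:/path/job.jar!/pkg/Foo.class;
        // everything before the '!' identifies the jar itself.
        String spec = url.getPath();
        int bang = spec.indexOf('!');
        return (bang >= 0) ? spec.substring(0, bang) : spec;
    }

    public static void main(String[] args) {
        // Prints the containing jar, or null when run from a class directory.
        System.out.println("jar for FindJar: " + findContainingJar(FindJar.class));
    }
}
```

A client could apply the same lookup to the configured Mapper class to find the user's jar, upload it to the staging directory, and let each TaskTracker pull it down before forking the child JVM.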