Subject: Re: Map Reduce slot
From: Bejoy Ks <bejoy.hadoop@gmail.com>
To: user@hadoop.apache.org
Date: Thu, 1 Nov 2012 17:13:21 +0530

Hi Udippan

The number of map/reduce task slots on your cluster is the sum of all the slots from your TaskTracker nodes. Based on each node's resource availability you can configure this on a per-node basis. The slots are defined at the node level using the properties mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum.
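
For example, on each TaskTracker you could put something like this in its mapred-site.xml (the values 4 and 2 are only illustrative; pick them based on the node's CPU and memory, and restart the TaskTracker after changing them):

  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>
  </property>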

As Kapil mentioned, the total number of slots across the cluster can be obtained from the JobTracker (JT) web UI.

The other three properties can be defined at the job level. However, in production clusters the JVM size is typically marked final to prevent abuse that may lead to OOM errors.
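
Marking a property final is done on the cluster side in mapred-site.xml; for instance, an admin could pin the task heap size like this (the 1 GB value is just an example):

  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m</value>
    <final>true</final>
  </property>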

The heap size of the task JVMs is defined by 'mapred.child.java.opts', which defaults to -Xmx200m (200 MB), and JVM reuse is controlled by 'mapred.job.reuse.jvm.num.tasks', which defaults to 1 task per JVM.
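
If your cluster does not mark them final, both can be overridden per job, either in the job's configuration or on the command line with -D. A minimal job-side fragment could look like this (values are illustrative):

  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx512m</value>
  </property>
  <property>
    <name>mapred.job.reuse.jvm.num.tasks</name>
    <value>-1</value>  <!-- -1 reuses a JVM for any number of the job's tasks -->
  </property>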

HTH.

Regards
Bejoy KS
