Subject: Re: knowing the nodes on which reduce tasks will run
From: Abhay Ratnaparkhi <abhay.ratnaparkhi@gmail.com>
To: user@hadoop.apache.org
Date: Mon, 3 Sep 2012 21:06:47 +0530

How can I set 'mapred.tasktracker.reduce.tasks.maximum' to "0" on a running
tasktracker? It seems that I need to restart the tasktracker, and in that case
I'll lose the map output already produced by that particular tasktracker.

Can I change 'mapred.tasktracker.reduce.tasks.maximum' to "0" without
restarting the tasktracker?

~Abhay

On Mon, Sep 3, 2012 at 8:53 PM, Bejoy Ks <bejoy.hadoop@gmail.com> wrote:

> Hi Abhay
>
> The TaskTrackers on which the reduce tasks are triggered are chosen at
> random, based on reduce slot availability. So if you don't want reduce
> tasks to be scheduled on certain nodes, you need to set
> 'mapred.tasktracker.reduce.tasks.maximum' to 0 on those nodes. The
> limitation here is that this property is not a job-level one; you have to
> set it per node, at the cluster level.
>
> A cleaner approach would be to configure each of your nodes with the
> right number of map and reduce slots, based on the resources available on
> each machine.
>
> On Mon, Sep 3, 2012 at 7:49 PM, Abhay Ratnaparkhi
> <abhay.ratnaparkhi@gmail.com> wrote:
>
>> Hello,
>>
>> How can one get to know the nodes on which reduce tasks will run?
>>
>> One of my jobs is running and it is completing all of its map tasks.
>> My map tasks write a lot of intermediate data, and the intermediate
>> directory is getting full on all the nodes.
>> If a reduce task is scheduled on any of these nodes, it will try to copy
>> the data onto the same disk and will eventually fail with disk-space
>> related exceptions.
>>
>> I have added a few more tasktracker nodes to the cluster and now want to
>> run the reducers on the new nodes only.
>> Is it possible to choose the node on which a reducer will run? What
>> algorithm does Hadoop use to pick a node to run a reducer?
>>
>> Thanks in advance.
>>
>> Bye
>> Abhay
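
For illustration, a minimal sketch of the per-node setting Bejoy describes,
assuming a Hadoop 1.x (MRv1) style conf/mapred-site.xml. The slot count of 4
on the new nodes is only an example value, and, as noted above, the
TaskTracker on each changed node still has to be restarted for the setting to
take effect:

  <!-- conf/mapred-site.xml on the existing nodes that should NOT run reduce tasks -->
  <configuration>
    <property>
      <name>mapred.tasktracker.reduce.tasks.maximum</name>
      <value>0</value>   <!-- no reduce slots on this TaskTracker -->
    </property>
  </configuration>

  <!-- conf/mapred-site.xml on the new, dedicated reducer nodes -->
  <configuration>
    <property>
      <name>mapred.tasktracker.reduce.tasks.maximum</name>
      <value>4</value>   <!-- example only: size to the node's CPU, memory and disk -->
    </property>
  </configuration>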