From: "David Parks"
To: user@hadoop.apache.org
Subject: RE: How can I limit reducers to one-per-node?
Date: Sat, 9 Feb 2013 11:24:20 +0700
Hmm, odd, I’m using AWS MapReduce, and this property is already set to 1 on my cluster by default (using 15 m1.xlarge boxes, which come with 3 reducer slots configured by default).

From: Nan Zhu [mailto:zhunansjtu@gmail.com]
Sent: Saturday, February 09, 2013 10:59 AM
To: user@hadoop.apache.org
Subject: Re: How can I limit reducers to one-per-node?

I think setting tasktracker.reduce.tasks.maximum to 1 may meet your requirement.

Best,

--
Nan Zhu
School of Computer Science,
McGill University

On Friday, 8 February, 2013 at 10:54 PM, David Parks wrote:

I have a cluster of boxes with 3 reducers per node. I want to limit a particular job to running only 1 reducer per node.

This job is network-IO bound, gathering images from a set of webservers.

My job has certain parameters set to meet “web politeness” standards (e.g. limits on the number of connections and the connection frequency).

If this job runs multiple reducers on the same node, those per-host limits will be violated. Also, this is a shared environment, and I don’t want long-running, network-bound jobs uselessly taking up all the reduce slots.
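The property Nan Zhu names is, in Hadoop 1.x / MRv1, spelled in full as mapred.tasktracker.reduce.tasks.maximum. A minimal sketch of the setting, assuming a hand-managed mapred-site.xml on each TaskTracker rather than EMR's generated configuration, looks like this:

    <!-- mapred-site.xml on each TaskTracker node (Hadoop 1.x / MRv1).
         Daemon-side, cluster-wide cap on concurrent reduce tasks per node;
         it applies to every job, and the TaskTracker must be restarted
         for a change to take effect. -->
    <property>
      <name>mapred.tasktracker.reduce.tasks.maximum</name>
      <value>1</value>
    </property>

Because the cap is enforced by the TaskTracker daemon, it cannot be set per job at submit time; on Elastic MapReduce it is presumably this value that the per-instance-type defaults (3 reduce slots on m1.xlarge, as noted above) fill in at cluster launch.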
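As for the “web politeness” limits in the original question, here is a minimal, hypothetical sketch in Java (the class name, interval, and fetch logic are illustrative assumptions, not taken from the actual job). It shows why the per-host limit only holds with one such task per node: the throttling state lives inside a single task’s JVM, so nothing coordinates two reducers running on the same machine.

    // Hypothetical sketch of a per-task "politeness" throttle: each task
    // enforces a minimum interval between requests to the same host.
    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.HashMap;
    import java.util.Map;

    public class PoliteFetcher {

        private final long minIntervalMs;                 // e.g. 2000 ms between hits on one host
        private final Map<String, Long> lastHit = new HashMap<String, Long>();

        public PoliteFetcher(long minIntervalMs) {
            this.minIntervalMs = minIntervalMs;
        }

        public byte[] fetch(URL url) throws Exception {
            String host = url.getHost();
            Long last = lastHit.get(host);
            if (last != null) {
                long wait = minIntervalMs - (System.currentTimeMillis() - last);
                if (wait > 0) {
                    Thread.sleep(wait);                   // back off until the interval has passed
                }
            }
            lastHit.put(host, System.currentTimeMillis());

            // Plain HttpURLConnection fetch; the actual job's HTTP client is unknown.
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            InputStream in = conn.getInputStream();
            try {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
                return out.toByteArray();
            } finally {
                in.close();
                conn.disconnect();
            }
        }
    }

Each reduce task would hold one PoliteFetcher instance, so two reduce tasks on the same node double the request rate to every host they share.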