Subject: Re: Multiple reducers
From: Bejoy Ks
To: common-user@hadoop.apache.org
Date: Tue, 29 Nov 2011 20:15:14 +0530

Hi Hoot

You can specify the number of reducers explicitly using -D mapred.reduce.tasks=n:

hadoop jar wordcount.jar com.wc.WordCount -D mapred.reduce.tasks=n /input /output

Currently your word count is triggering just 1 reducer because the default value of mapred.reduce.tasks would be set to 1 in your configuration file.

Hope it helps!

Regards
Bejoy.K.S

On Tue, Nov 29, 2011 at 8:03 PM, Hoot Thompson wrote:

> I'm trying to prove that my cluster will in fact support multiple reducers,
> the wordcount example doesn't seem to spawn more than one (1). Is that
> correct? Is there a sure fire way to prove my cluster is configured
> correctly in terms of launching the maximum (say two per node) number of
> mappers and reducers?
>
> Thanks!
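A follow-up note on the same point: the reducer count can also be hard coded in the job driver with job.setNumReduceTasks(). Below is a minimal, untested sketch against the new org.apache.hadoop.mapreduce API; the class names (WordCountDriver, TokenizerMapper, IntSumReducer) are only illustrative and not the classes inside your wordcount.jar. Also note that the -D generic option is only honoured when the driver parses options through ToolRunner/GenericOptionsParser, as this sketch does, and that a value set via setNumReduceTasks() takes precedence over mapred.reduce.tasks.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCountDriver extends Configured implements Tool {

    // Standard word count mapper: emits (token, 1) for every token in a line.
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Standard word count reducer: sums the counts for each token.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        // getConf() already contains any -D overrides parsed by ToolRunner.
        Job job = new Job(getConf(), "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // Hard-coding the reducer count; this overrides any
        // -D mapred.reduce.tasks value on the command line, so drop this
        // line if you want to control the count purely via -D.
        job.setNumReduceTasks(3);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new WordCountDriver(), args));
    }
}

You would run it the same way as before, e.g. hadoop jar wordcount.jar WordCountDriver /input /output; with setNumReduceTasks(3) in place the job should launch 3 reduce tasks, which is an easy way to confirm the cluster really does run multiple reducers.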