Subject: Re: killed reducers
From: Lior Schachter
To: mapreduce-user@hadoop.apache.org
Cc: Harsh J
Date: Mon, 28 Mar 2011 22:37:56 +0200

Thanks.
I believe this will solve the problem.

Lior

On Mon, Mar 28, 2011 at 6:38 PM, Harsh J wrote:
> Are you looking to disable the speculative execution feature of MR?
>
> You can do so for your job by passing 'false' to
> jobConf.setSpeculativeExecution(..), or fine-control map and reduce
> speculative execution individually using jobConf.setMapSpeculativeExecution(..)
> and jobConf.setReduceSpeculativeExecution(..).
>
> On Mon, Mar 28, 2011 at 8:32 PM, Lior Schachter wrote:
> > Hi,
> > We have a map/reduce job that inserts to HBase (in the reduce phase).
> > Our problem is that some reducers finish early, and the framework then uses
> > them to "back up" still-running reducers (as it should).
> > This causes multiple redundant writes to HBase, which is costly. These
> > backup reducers are eventually killed once the original reducers finish their
> > work.
> >
> > Can we configure our job to start the "backup" reducer only when a
> > failure actually occurs (we don't want to compromise robustness)?
> >
> > Thanks,
> > Lior
>
> --
> Harsh J
> http://harshj.com
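[Archive note: the same settings can also be applied declaratively rather than through the JobConf setters Harsh mentions. A minimal sketch of the equivalent job configuration, assuming a Hadoop 0.20-era deployment where the `mapred.*.tasks.speculative.execution` properties (default `true`) back those setters:]

```xml
<!-- Disable speculative (backup) map attempts for this job -->
<property>
  <name>mapred.map.tasks.speculative.execution</name>
  <value>false</value>
</property>
<!-- Disable speculative (backup) reduce attempts, avoiding the
     redundant HBase writes described in the thread -->
<property>
  <name>mapred.reduce.tasks.speculative.execution</name>
  <value>false</value>
</property>
```

[These properties can go in the job's configuration or be passed per run with `-D`; note that a failed reduce attempt is still retried normally, so disabling speculation does not reduce fault tolerance, only the opportunistic backup attempts.]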