hadoop-mapreduce-user mailing list archives

From: Lior Schachter <li...@infolinks.com>
Subject: Re: killed reducers
Date: Mon, 28 Mar 2011 20:37:56 GMT
Thanks. I believe this will solve the problem.

Lior

On Mon, Mar 28, 2011 at 6:38 PM, Harsh J <qwertymaniac@gmail.com> wrote:

> Are you looking to disable the speculative execution feature of MR?
>
> You can do so for your job by passing 'false' to
> jobConf.setSpeculativeExecution(..), or control map and reduce
> speculative execution individually using
> jobConf.setMapSpeculativeExecution(..) and
> jobConf.setReduceSpeculativeExecution(..).
>
> On Mon, Mar 28, 2011 at 8:32 PM, Lior Schachter <liors@infolinks.com>
> wrote:
> > Hi,
> > We have a map/reduce job that inserts to HBase (in the reduce phase).
> > Our problem is that some reducers finish early, and the framework then
> > uses the freed slots to run "backup" copies of still-running reducers
> > (as it is designed to do). This causes multiple redundant writes to
> > HBase (which is costly). These backup reducers are eventually killed,
> > since the original reducers finish their job.
> >
> > Can we configure our job to start the "backup" reducer only in case a
> > failure actually occurs (we don't want to compromise robustness)?
> >
> > Thanks,
> > Lior
> >
> >
>
>
>
> --
> Harsh J
> http://harshj.com
>
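For reference, a minimal sketch of the settings Harsh describes above, using
the old-style org.apache.hadoop.mapred JobConf API. The job and class names
here are illustrative only; the three setter calls are the ones named in his
reply.

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    // Hypothetical driver class for the HBase-loading job.
    public class MyJob {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(MyJob.class);
        conf.setJobName("hbase-load");

        // Turn speculative execution off for both maps and reduces...
        conf.setSpeculativeExecution(false);

        // ...or keep maps speculative and disable it only for the
        // HBase-writing reduce phase:
        conf.setMapSpeculativeExecution(true);
        conf.setReduceSpeculativeExecution(false);

        JobClient.runJob(conf);
      }
    }

The same switches are exposed as the configuration properties
mapred.map.tasks.speculative.execution and
mapred.reduce.tasks.speculative.execution, so they can also be set per-job
on the command line or cluster-wide in mapred-site.xml.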
