hadoop-mapreduce-user mailing list archives

From Anfernee Xu <anfernee...@gmail.com>
Subject Re: long running reduce task was killed due to failed to report status for 602 seconds
Date Fri, 28 Jan 2011 00:22:17 GMT
I also tried forking a new thread that periodically calls setStatus() and
progress() from my reduce task, but it did not help.
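For the record, the reporter thread described above looks roughly like this (a minimal sketch; `Reportable` and `ProgressReporter` are names I made up here so the snippet is self-contained — in the real task the calls go to the mapreduce `Context`):

```java
// Minimal stand-in for the setStatus()/progress() subset of the
// mapreduce Context (hypothetical interface, for illustration only).
interface Reportable {
    void setStatus(String status);
    void progress();
}

// Daemon thread that pings the framework every intervalMs so the task
// is not killed for failing to report status within the task timeout.
class ProgressReporter extends Thread {
    private final Reportable context;
    private final long intervalMs;
    private volatile boolean running = true;

    ProgressReporter(Reportable context, long intervalMs) {
        this.context = context;
        this.intervalMs = intervalMs;
        setDaemon(true); // do not keep the JVM alive after the task ends
    }

    @Override
    public void run() {
        while (running) {
            context.setStatus("still working...");
            context.progress();
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                return; // shutdown() interrupts us; exit quietly
            }
        }
    }

    void shutdown() {
        running = false;
        interrupt();
    }
}
```

In the real task you would start the reporter at the top of cleanup() and call shutdown() in a finally block around the heavy loop.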

BTW, I'm using Hadoop 0.21.0; could this be a bug?
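Incidentally, the "602 seconds" in the subject lines up with the default task timeout of 600000 ms, so another workaround (not a fix) is to raise that timeout in mapred-site.xml. A sketch, assuming the classic property name — the key is version-dependent, and newer releases rename it mapreduce.task.timeout:

```xml
<!-- Raise the task timeout to 30 minutes (value is in milliseconds).
     Property name varies by version: mapred.task.timeout in older
     releases, mapreduce.task.timeout in later ones. -->
<property>
  <name>mapred.task.timeout</name>
  <value>1800000</value>
</property>
```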

Thanks

Anfernee

On Thu, Jan 27, 2011 at 10:15 PM, Ivan Leonardi <ivanleonardi@gmail.com>wrote:

> I had the same problem! Try building a little test suite to see how
> much time your algorithm actually takes. I discovered that mine was
> taking 18 minutes!
> Actually, I guess your problem lies in your comment "massive work".
>
> Ivan
>
> 2011/1/27 Anfernee Xu <anfernee.xu@gmail.com>:
> > This question has been asked before, but I tried the suggested
> > solutions, such as calling Context.setStatus() or progress(), and
> > neither of them helped. Please advise.
> > My reduce task is doing some CPU-intensive work; below is my code
> > snippet:
> > @Override
> > protected void reduce(Text input, Iterable<LongWritable> docsIDs,
> >     Context context) throws IOException, InterruptedException {
> >   // really quick operation
> > }
> >
> > @Override
> > protected void cleanup(Context context) throws IOException,
> >     InterruptedException {
> >   // massive work here
> >   for (int i = 0; ...; i++) {
> >     // doing one iteration
> >     context.setStatus("Iteration #" + i);
> >     context.progress();
> >   }
> > }
> > --
> > --Anfernee
> >
>



-- 
--Anfernee
