hadoop-mapreduce-user mailing list archives

From Ivan Leonardi <ivanleona...@gmail.com>
Subject Re: long running reduce task was killed due to failed to report status for 602 seconds
Date Thu, 27 Jan 2011 14:15:06 GMT
I had the same problem! Try building a little testing suite to see how much
time your algorithm actually needs. I discovered that mine was taking 18
minutes! My guess is that your problem lies in your comment "massive work".
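When a single step of that work can outlast the 600-second task timeout, one
common workaround is to fire the progress call from a background daemon thread
instead of relying on the loop body to reach context.progress() in time. Here
is a minimal sketch of the pattern; note that the Hadoop context call is
replaced by a plain Runnable so it compiles without the Hadoop jars, and the
short interval is only so the demo finishes quickly (in a real task you would
report every minute or so, well inside the timeout):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class HeartbeatSketch {

    /** Runs work while a daemon thread fires progress every periodMillis. */
    static void runWithHeartbeat(Runnable work, Runnable progress,
                                 long periodMillis) {
        ScheduledExecutorService heartbeat =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "progress-heartbeat");
                t.setDaemon(true); // never block JVM shutdown
                return t;
            });
        heartbeat.scheduleAtFixedRate(progress, 0, periodMillis,
                                      TimeUnit.MILLISECONDS);
        try {
            work.run();
        } finally {
            heartbeat.shutdownNow(); // stop reporting once the work is done
        }
    }

    static int demo() {
        // Stand-in for context.progress(); here we just count the calls.
        AtomicInteger pings = new AtomicInteger();
        runWithHeartbeat(() -> {
            try {
                Thread.sleep(350); // stand-in for the long-running work
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, pings::incrementAndGet, 100);
        return pings.get();
    }

    public static void main(String[] args) {
        System.out.println("progress calls during work: " + demo());
    }
}
```

In a real reducer the Runnable passed as progress would simply call
context.progress(); the point is that the reporting cadence no longer depends
on how long each iteration of the heavy loop takes.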

Ivan

2011/1/27 Anfernee Xu <anfernee.xu@gmail.com>:
> This question has been asked before, and I tried the suggested solutions such as
> calling Context.setStatus() or progress(), but neither of them helped. Please advise.
> My reduce task is doing some CPU-intensive work; below is my
> code snippet:
> @Override
>   protected void reduce(Text input, Iterable<LongWritable> docsIDs,
>       Context context) throws IOException, InterruptedException {
>       // really quick operation
>   }
>  @Override
>   protected void cleanup(Context context) throws IOException,
>       InterruptedException {
>     // massive work here
>    for(....){
>        //doing one iteration
>       context.setStatus("Iteration #"+i);
>       context.progress();
>    }
> }
> --
> --Anfernee
>
