hadoop-common-user mailing list archives

From Arun C Murthy <...@yahoo-inc.com>
Subject Re: Task was killed due to running over 600 sec
Date Tue, 29 Jan 2008 15:03:30 GMT

On Jan 28, 2008, at 11:12 PM, ChaoChun Liang wrote:

>
> lohit.vijayarenu wrote:
>>
>> You could try setting mapred.task.timeout to a higher value.
>> Thanks,
>> Lohit
>>
>
> Could I set different timeout values for the mapper and the reducer
> separately?
> In my case, the execution time for the mapper is shorter than the
> reducer's.
>

No. There isn't a way to do that.
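
For reference, the one knob that does exist is mapred.task.timeout
itself, and it applies to map and reduce tasks alike. A minimal
sketch of raising it, assuming the JobConf API (the TimeoutExample
class name and the 30-minute value are just illustrative; the
property is in milliseconds, and the 600-second default is 600000):

    import org.apache.hadoop.mapred.JobConf;

    public class TimeoutExample {
      public static JobConf configure(JobConf conf) {
        // One timeout for the whole job, map *and* reduce tasks;
        // there is no per-phase setting. A value of 0 disables it.
        conf.setLong("mapred.task.timeout", 30 * 60 * 1000L); // 30 min
        return conf;
      }
    }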

However, it really _is_ better to send progress/status updates to the
TaskTracker than to work around the timeout... in fact it is as simple
as calling *reporter.progress()* periodically, or reporter.setStatus()
on the Reporter that is passed to the map/reduce method. It helps
debugging too...
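
Concretely, a sketch of what that looks like, assuming the
org.apache.hadoop.mapred API (the SlowReducer class and the
word-count-style types are made up for illustration):

    import java.io.IOException;
    import java.util.Iterator;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reducer;
    import org.apache.hadoop.mapred.Reporter;

    public class SlowReducer extends MapReduceBase
        implements Reducer<Text, IntWritable, Text, IntWritable> {

      public void reduce(Text key, Iterator<IntWritable> values,
                         OutputCollector<Text, IntWritable> output,
                         Reporter reporter) throws IOException {
        int sum = 0;
        while (values.hasNext()) {
          sum += values.next().get(); // stand-in for slow per-value work
          // Tell the framework the task is alive; each call resets
          // the mapred.task.timeout clock.
          reporter.progress();
        }
        // Optional human-readable status, visible in the web UI.
        reporter.setStatus("finished key " + key);
        output.collect(key, new IntWritable(sum));
      }
    }

A reduce that takes far longer than mapred.task.timeout in total is
fine as long as it keeps reporting like this between units of work.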

Arun
> Thanks.
> ChaoChun
>
> -- 
> View this message in context: http://www.nabble.com/Task-was-killed-due-to-running-over-600-sec-tp15148129p15153682.html
> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>

