hadoop-common-user mailing list archives

From Vlad Kudelin <vkude...@yahoo-inc.com>
Subject Re: Pipes task being killed
Date Wed, 05 Mar 2008 19:17:50 GMT
There is (or was) an option you could try playing with, something like:
-jobconf mapred.task.timeout=600000
The value is in milliseconds.
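
For reference, a full submit command would look roughly like the following 
(the input/output paths and program location are placeholders, and the exact 
set of flags can vary between Hadoop versions):

  bin/hadoop pipes \
      -jobconf mapred.task.timeout=600000 \
      -input <hdfs input dir> \
      -output <hdfs output dir> \
      -program <hdfs path to your pipes executable>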

// I actually needed to use it to *decrease* the default timeout, which I 
believe is quite large; at times things hang, and this is a possible 
workaround to get a hung task killed by the framework.

PS: I completely agree with Richard's observation: Hadoop doesn't care 
whether your app has finished (or how it finished); all that matters is 
that stdin is being consumed...
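
Regarding the heartbeat question further down: as far as I can tell, the 
C++ Pipes API already exposes TaskContext::progress() and 
TaskContext::setStatus(), which play the same role as the Java-side 
progress reporting. A rough sketch (the class names below are made up for 
illustration; only the HadoopPipes / HadoopUtils calls come from the Pipes 
headers):

#include "hadoop/Pipes.hh"
#include "hadoop/TemplateFactory.hh"
#include "hadoop/StringUtils.hh"

// The mapper is not the interesting part here; it just passes records through.
class PassThroughMapper : public HadoopPipes::Mapper {
public:
  PassThroughMapper(HadoopPipes::TaskContext& context) {}
  void map(HadoopPipes::MapContext& context) {
    context.emit(context.getInputKey(), context.getInputValue());
  }
};

// A reducer that does a lot of work per key and reports progress as it goes,
// so the TaskTracker does not kill it once mapred.task.timeout expires.
class LongComputeReducer : public HadoopPipes::Reducer {
public:
  LongComputeReducer(HadoopPipes::TaskContext& context) {}
  void reduce(HadoopPipes::ReduceContext& context) {
    int processed = 0;
    while (context.nextValue()) {
      // ... expensive computation on context.getInputValue() goes here ...
      if (++processed % 1000 == 0) {
        context.progress();   // heartbeat to the TaskTracker
        context.setStatus("processed " +
                          HadoopUtils::toString(processed) + " values");
      }
    }
    context.emit(context.getInputKey(), HadoopUtils::toString(processed));
  }
};

int main(int argc, char* argv[]) {
  return HadoopPipes::runTask(
      HadoopPipes::TemplateFactory<PassThroughMapper, LongComputeReducer>());
}

This is essentially the standard word-count skeleton with progress() and 
setStatus() calls added inside the reduce loop; reporting either one should 
reset the task timeout clock.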

Vlad.


Richard Kasperski wrote:
> I think you just need to write to stderr. My understanding is that 
> Hadoop is happy as long as input is being consumed, output is being 
> generated, or status is being reported.
>
> Rahul Sood wrote:
>> Hi,
>>
>> We have a Pipes C++ application where the Reduce task does a lot of
>> computation. After some time the task gets killed by the Hadoop
>> framework. The job output shows the following error:
>>
>> Task task_200803051654_0001_r_000000_0 failed to report status for 604
>> seconds. Killing!
>>
>> Is there any way to send a heartbeat to the TaskTracker from a Pipes
>> application? I believe this is possible in Java using
>> org.apache.hadoop.util.Progress, and we're looking for something
>> equivalent in the C++ Pipes API.
>>
>> -Rahul Sood
>> rsood@yahoo-inc.com
>>
>

