hadoop-common-user mailing list archives

From Aleksandr Elbakyan <ramal...@yahoo.com>
Subject Re: Kill Task Programmatically
Date Wed, 03 Aug 2011 23:40:11 GMT

You can just throw a runtime exception. In that case the task attempt will fail. :)
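To make the suggestion concrete: inside your Mapper or Reducer you throw an unchecked exception, the attempt is marked failed, and the framework reschedules it (usually on a different node) until it succeeds or the configured attempt limit (mapred.map.max.attempts, default 4) is exhausted. The sketch below is self-contained with no Hadoop dependency; isBadHost() and the retry loop are illustrative stand-ins for the machine-local check in your task and for what the TaskTracker does when an attempt fails.

```java
// Self-contained sketch of the fail-fast idea from the reply above.
// In real Hadoop code, only runTaskAttempt's body (the check + throw)
// lives in your Mapper/Reducer; the retry loop is the framework's job.
public class FailFastSketch {

    // Stand-in for whatever host-specific condition tells you the
    // task cannot succeed on this machine.
    static boolean isBadHost(String host) {
        return host.startsWith("bad-");
    }

    // The "task attempt": throwing a RuntimeException is all the user
    // code needs to do -- no special kill API is required.
    static void runTaskAttempt(String host) {
        if (isBadHost(host)) {
            throw new RuntimeException("unsuitable host: " + host);
        }
        // ... normal map/reduce work would go here ...
    }

    // Illustrative retry loop: a failed attempt is rescheduled on the
    // next host until one succeeds or the attempt limit is reached.
    static String runWithRetries(String[] hosts, int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts && attempt < hosts.length; attempt++) {
            try {
                runTaskAttempt(hosts[attempt]);
                return hosts[attempt]; // succeeded on this host
            } catch (RuntimeException e) {
                // attempt failed; the framework would log it and retry elsewhere
            }
        }
        return null; // all attempts exhausted: the job would be marked failed
    }

    public static void main(String[] args) {
        System.out.println(runWithRetries(new String[]{"bad-node1", "good-node2"}, 4));
    }
}
```

Note this fails the attempt (it counts toward the failure limit), which is exactly what you want here: a kill, by contrast, does not count against the limit, but there is no clean way to self-kill from inside user code.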


--- On Wed, 8/3/11, Adam Shook <ashook@clearedgeit.com> wrote:

From: Adam Shook <ashook@clearedgeit.com>
Subject: Kill Task Programmatically
To: "common-user@hadoop.apache.org" <common-user@hadoop.apache.org>
Date: Wednesday, August 3, 2011, 3:33 PM

Is there any way I can programmatically kill or fail a task, preferably from inside a Mapper
or Reducer?

I have a use case where, at some point during a map or reduce task, I know it won't succeed based solely on the machine it is running on. It is rare, but I would prefer to kill the task and have Hadoop restart it on a different machine as usual, instead of waiting for the 10-minute default timeout.

I suppose speculative execution could take care of it, but I would rather not rely on it if I can kill the task myself.

