hadoop-common-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: Killing hadoop jobs automatically
Date Mon, 30 Jan 2012 07:20:14 GMT
In the current stable releases, this is available at the task level, with
a default of 10 minutes of non-responsiveness per task. It is controlled
per-job via the mapred.task.timeout property (in milliseconds).
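For concreteness, a per-job override of that task timeout might look like the sketch below. The jar name, driver class, and paths are placeholders, and this form assumes the job's driver goes through ToolRunner/GenericOptionsParser so that -D properties are picked up:

```shell
# Hypothetical submission raising the per-task timeout to 30 minutes
# (1,800,000 ms) for this job only. "myjob.jar", "MyDriver", and the
# input/output paths are placeholders, not names from the thread.
hadoop jar myjob.jar MyDriver -Dmapred.task.timeout=1800000 input/ output/
```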

There is no built-in feature that lets you monitor and enforce a timeout
on the job execution itself, however (though it should be easy to script).
How do you imagine that being useful compared with the per-task timeouts,
which unstick jobs, or eventually fail them, when they are improperly
written (i.e. they hang and report no status for the timeout period)?
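Scripting the whole-job timeout Harsh describes could look roughly like this, using the era's `hadoop job -list` / `hadoop job -kill` CLI. The function names, the polling interval, and the example job id are assumptions for illustration, not anything from the thread:

```shell
#!/usr/bin/env bash
# Hypothetical watchdog: kill a Hadoop job once it exceeds a wall-clock
# deadline. Assumes the "hadoop job" subcommands of Hadoop 1.x-era CLIs.

# Succeeds (exit 0) when elapsed time has passed the timeout.
should_kill() {
  local start_epoch=$1 timeout_secs=$2 now_epoch=$3
  [ $(( now_epoch - start_epoch )) -gt "$timeout_secs" ]
}

watch_job() {
  local job_id=$1 timeout_secs=$2
  local start_epoch
  start_epoch=$(date +%s)
  # Poll while the job still shows up in the running-jobs list.
  while hadoop job -list 2>/dev/null | grep -q "$job_id"; do
    if should_kill "$start_epoch" "$timeout_secs" "$(date +%s)"; then
      echo "Job $job_id exceeded ${timeout_secs}s; killing it."
      hadoop job -kill "$job_id"
      return 1
    fi
    sleep 30
  done
}

# Example with a made-up job id and a one-hour limit:
#   watch_job job_201201300001_0001 3600
```

Run from cron or as a background step after submission; the watchdog exits on its own once the job finishes normally.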

On Mon, Jan 30, 2012 at 12:36 PM, praveenesh kumar <praveenesh@gmail.com> wrote:
> Is there any way we can kill hadoop jobs that are taking
> too long to execute?
> What I want to achieve is: if some job runs for more than
> "_some_predefined_timeout_limit", it should be killed automatically.
> Is it possible to achieve this through shell scripts, or some other way?
> Thanks,
> Praveenesh

Harsh J
Customer Ops. Engineer, Cloudera
