hadoop-mapreduce-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: Jobs are still in running state after executing "hadoop job -kill jobId"
Date Fri, 01 Jul 2011 17:29:56 GMT
Juwei,

It's odd that a killed job should get "recovered" back into a running
state. Can you not simply disable the JT recovery feature? (I believe
it's turned off by default.)
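
(For reference, a minimal sketch of what disabling it would look like
in mapred-site.xml, assuming the mapred.jobtracker.restart.recover
property name from the 0.20/1.x MRv1 line:

  <property>
    <!-- false = do not replay persisted jobs when the JT restarts -->
    <name>mapred.jobtracker.restart.recover</name>
    <value>false</value>
  </property>

If I recall right, the JT picks the jobs it recovers out of its system
directory (mapred.system.dir) on HDFS, so clearing that directory while
the JT is stopped should also stop the replay, though the flag above is
the safer route.)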

On Fri, Jul 1, 2011 at 10:47 PM, Juwei Shi <shijuwei@gmail.com> wrote:
> Thanks Harsh.
>
> The jobs come back as "recovered" after I reboot MapReduce/HDFS.
>
> Is there any other way to delete the status records of the running jobs,
> so that they will not be recovered after restarting the JT?
>
> 2011/7/2 Harsh J <harsh@cloudera.com>
>>
>> Juwei,
>>
>> Please do not cross-post to multiple lists. I believe this question
>> suits the mapreduce-user@ list, so I am replying only there.
>>
>> On Fri, Jul 1, 2011 at 9:22 PM, Juwei Shi <shijuwei@gmail.com> wrote:
>> > Hi,
>> >
>> > I am facing a problem where the jobs are still running after executing
>> > "hadoop job -kill jobId". I rebooted the cluster, but the jobs still
>> > cannot be killed.
>>
>> What do the JT logs say after you attempt to kill a job ID? Does the
>> same job ID keep running even afterwards, or are you seeing other jobs
>> continue to launch?
>>
>> --
>> Harsh J
>
> --
> - Juwei
>



-- 
Harsh J
