hadoop-mapreduce-user mailing list archives

From Juwei Shi <shiju...@gmail.com>
Subject Re: Jobs are still in running state after executing "hadoop job -kill jobId"
Date Fri, 01 Jul 2011 17:17:10 GMT
Thanks Harsh.

The jobs are "recovered" after I reboot MapReduce/HDFS.

Is there any other way to delete the status records of the running jobs,
so that they will not be recovered after restarting the JT?
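[As a hedged aside, not from the original thread: in Hadoop 0.20/1.x, whether the JobTracker replays jobs from its recovery state after a restart is governed by the mapred.jobtracker.restart.recover property in mapred-site.xml. With it set to false, the JT should not attempt to resurrect jobs on restart. A minimal sketch, assuming a 0.20/1.x-era deployment:]

```xml
<!-- mapred-site.xml: sketch, assuming Hadoop 0.20/1.x.
     With recovery disabled, the JobTracker does not re-submit
     jobs found in its recovery state after a restart. -->
<property>
  <name>mapred.jobtracker.restart.recover</name>
  <value>false</value>
</property>
```

[The per-job state files themselves live under mapred.system.dir on HDFS; the exact path varies by deployment, so check your own configuration before removing anything there.]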

2011/7/2 Harsh J <harsh@cloudera.com>

> Juwei,
>
> Please do not cross-post to multiple lists. I believe this question
> suits the mapreduce-user@ list, so I am replying only there.
>
> On Fri, Jul 1, 2011 at 9:22 PM, Juwei Shi <shijuwei@gmail.com> wrote:
> > Hi,
> >
> > I ran into a problem where jobs are still running after executing "hadoop
> > job -kill jobId". I rebooted the cluster, but the jobs still cannot be
> > killed.
>
> What do the JT logs say after you attempt to kill a job ID? Does the
> same job ID keep running even afterwards, or are you seeing other jobs
> continue to launch?
>
> --
> Harsh J
>

-- 
- Juwei
