hadoop-mapreduce-user mailing list archives

From Juwei Shi <shiju...@gmail.com>
Subject Re: Jobs are still in running state after executing "hadoop job -kill jobId"
Date Tue, 05 Jul 2011 15:48:24 GMT
I am sorry that I cc'd common-user again. Please reply to this mail without
including the common-user list.

2011/7/5 Juwei Shi <shijuwei@gmail.com>

> We sometimes have hundreds of map or reduce tasks for a job. I think it is
> hard to find all of them and kill the corresponding JVM processes. If we do
> not want to restart Hadoop, is there an automatic way to do this?
>
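For example, something along these lines on each TaskTracker node might do
it, assuming the task JVMs appear as org.apache.hadoop.mapred.Child
processes with the task attempt id on their command line, as on 0.20.2 (the
job id below is made up; adjust the patterns for your cluster):

    # rough sketch - run on every TaskTracker node
    JOB=job_201107011030_0042            # hypothetical job id
    SUFFIX=${JOB#job_}                   # numeric part, e.g. 201107011030_0042
    # attempt ids look like attempt_201107011030_0042_m_000007_0 and show up
    # on the Child JVM command line, so grep for them and kill those pids
    ps -ef | grep '[o]rg.apache.hadoop.mapred.Child' \
           | grep "attempt_${SUFFIX}" \
           | awk '{print $2}' \
           | xargs kill -9

Trying a plain kill before kill -9 is gentler on the TaskTracker.
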
> 2011/7/5 <Jeff.Schmitz@shell.com>
>
>> Um, kill -9 "pid"?
>>
>> -----Original Message-----
>> From: Juwei Shi [mailto:shijuwei@gmail.com]
>> Sent: Friday, July 01, 2011 10:53 AM
>> To: common-user@hadoop.apache.org; mapreduce-user@hadoop.apache.org
>> Subject: Jobs are still in running state after executing "hadoop job
>> -kill jobId"
>>
>> Hi,
>>
>> I am facing a problem where jobs are still in the running state after
>> executing "hadoop job -kill jobId". I rebooted the cluster, but the job
>> still cannot be killed.
>>
>> The Hadoop version is 0.20.2.
>>
>> Any idea?
>>
>> Thanks in advance!
>>
>> --
>> - Juwei
>>
>>
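
For reference, the job CLI in 0.20 can also be used to double-check what the
JobTracker reports and to kill individual task attempts; a minimal sketch
(the ids here are made up):

    # see whether the JobTracker still lists the job as running
    hadoop job -list
    hadoop job -status job_201107011030_0042

    # retry the kill, or kill a single stuck attempt instead of the whole job
    hadoop job -kill job_201107011030_0042
    hadoop job -kill-task attempt_201107011030_0042_m_000007_0

If the JobTracker no longer lists the job but the task JVMs are still alive,
they have to be cleaned up on the TaskTracker nodes as discussed above.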
