hadoop-common-user mailing list archives

From abhishek sharma <absha...@usc.edu>
Subject Re: stop scripts not working properly
Date Wed, 14 Apr 2010 06:15:04 GMT
Hi Todd,

I am using the tarball.

Let me try configuring the pid files to be stored somewhere else.
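
In case it helps anyone else reading the archives, here is a minimal
sketch of what I plan to try, assuming the stock tarball layout (the
directory path below is my own choice, not a recommendation):

    # conf/hadoop-env.sh
    # The daemon scripts default HADOOP_PID_DIR to /tmp, where periodic
    # cleanup can remove the pid files; point it somewhere persistent.
    export HADOOP_PID_DIR=/var/hadoop/pids

The daemons will need a restart (killing any stray processes by hand
once) so that new pid files get written under the new directory.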

Thanks for the tip,
Abhishek

On Tue, Apr 13, 2010 at 11:10 PM, Todd Lipcon <todd@cloudera.com> wrote:
> Hi Abhishek,
>
> Are you using the tarball or the RPMs/debs? The issue is most likely that
> your pid files are ending up in /tmp and thus getting cleaned out
> periodically.
>
> -Todd
>
> On Tue, Apr 13, 2010 at 11:07 PM, abhishek sharma <absharma@gmail.com> wrote:
>
>> Hi all,
>>
>> I am using the Cloudera Hadoop distribution version 0.20.2+228.
>>
>> I have a small 9-node cluster, and when I try to stop the Hadoop DFS
>> and MapReduce daemons using the stop-mapred.sh and stop-dfs.sh
>> scripts, they do not shut down some of the TaskTrackers and
>> DataNodes. I get a message saying there is no tasktracker or datanode
>> to stop, but when I log into the machines, I can see the TaskTracker
>> and DataNode processes still running (e.g., using jps).
>>
>> I did not notice anything unusual in the log files. I am not sure
>> what the problem might be, but when I use Hadoop version 0.20.0, the
>> scripts work fine.
>>
>> Any idea what might be happening?
>>
>> Thanks,
>> Abhishek
>>
>
>
>
> --
> Todd Lipcon
> Software Engineer, Cloudera
>
