hadoop-common-user mailing list archives

From Raymond Jennings III <raymondj...@yahoo.com>
Subject RE: why does 'jps' lose track of hadoop processes ?
Date Mon, 29 Mar 2010 17:05:35 GMT
That would explain why the processes cannot be stopped, but the mystery of why jps loses track
of these active processes remains.  Even when jps does not report any hadoop process,
I can still submit and run jobs just fine.  The next time it happens I will check
whether the hadoop pids match what is in the pid files.  If they differ, would that somehow
mean the hadoop processes were being restarted?
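One way to check this would be to compare the pid recorded in each pid file against the live processes. A minimal sketch, assuming the defaults from hadoop-env.sh: HADOOP_PID_DIR falls back to /tmp, and the pid files are named like hadoop-&lt;user&gt;-&lt;daemon&gt;.pid (both are assumptions about the local setup):

```shell
# Compare each recorded pid against running processes.
# Assumes default pid dir (/tmp) and hadoop-*.pid naming.
PID_DIR=${HADOOP_PID_DIR:-/tmp}
for f in "$PID_DIR"/hadoop-*.pid; do
  [ -e "$f" ] || continue        # no pid files present
  pid=$(cat "$f")
  if ps -p "$pid" > /dev/null 2>&1; then
    echo "$f: pid $pid is running"
  else
    echo "$f: pid $pid is NOT running (stale file or restarted daemon)"
  fi
done
```

If the pids in the files no longer match `ps -ef | grep java`, the files are stale and the stop scripts will have nothing valid to signal.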

--- On Mon, 3/29/10, Bill Habermaas <bill@habermaas.us> wrote:

> From: Bill Habermaas <bill@habermaas.us>
> Subject: RE: why does 'jps' lose track of hadoop processes ?
> To: common-user@hadoop.apache.org
> Date: Monday, March 29, 2010, 11:44 AM
> Sounds like your pid files are
> getting cleaned out of whatever directory
> they are being written (maybe garbage collection on a temp
> directory?). 
> 
> Look at (taken from hadoop-env.sh):
> # The directory where pid files are stored. /tmp by
> default.
> # export HADOOP_PID_DIR=/var/hadoop/pids
> 
> The hadoop shell scripts look in the directory that is
> defined.
> 
> Bill
> 
> -----Original Message-----
> From: Raymond Jennings III [mailto:raymondjiii@yahoo.com]
> 
> Sent: Monday, March 29, 2010 11:37 AM
> To: common-user@hadoop.apache.org
> Subject: why does 'jps' lose track of hadoop processes ?
> 
> After running hadoop for some period of time, the command
> 'jps' fails to
> report any hadoop process on any node in the cluster. 
> The processes are
> still running as can be seen with 'ps -ef|grep java'
> 
> In addition, scripts like stop-dfs.sh and stop-mapred.sh no
> longer find the
> processes to stop.
> 
> 


