hadoop-general mailing list archives

From Harsh J <qwertyman...@gmail.com>
Subject Re: Hadoop Shutdown Problems
Date Mon, 30 Aug 2010 07:34:42 GMT
Maybe the user who issued stop-all.sh does not have permission to
terminate the NameNode process (and some of the others, depending on
who/what started them). Check the jps listing after stopping, and with
some ps/top checks, switch to the proper user and issue stop-all.sh again?
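
For example (just a sketch, assuming the daemons were started as root and
that HADOOP_HOME points at your install; <pid> is a placeholder for the PID
jps reports, adjust everything to your setup):

  # see which Hadoop daemons are still alive and note their PIDs
  jps
  # check which user owns the NameNode process
  ps -o user,pid,cmd -p <pid>
  # switch to that user and run the stop scripts again
  su - root -c "$HADOOP_HOME/bin/stop-all.sh"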

You can also send it a SIGTERM, I believe.
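
Something along these lines, with <pid> again being the NameNode PID taken
from jps (a sketch, not verified on your setup):

  # ask the daemon to shut down cleanly
  kill -TERM <pid>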

On Mon, Aug 30, 2010 at 12:22 PM, vaibhav negi <sssssssenator@gmail.com> wrote:
> Hi,
>
> I am running Hadoop 0.20.2 with a 2-node cluster.
> I executed the stop-all.sh script, but two log lines are still being created
> every hour in the NameNode's log directory.
> How do I completely shut down the Hadoop cluster?
> Below is one such log line.
>
>
> 2010-08-29 00:30:00,018 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
> ugi=root,root,bin,daemon,sys,adm,disk,wheel  ip=/10.0.8.47  cmd=listStatus
> src=/user  dst=null  perm=null
>
>
> Vaibhav Negi
>



-- 
Harsh J
www.harshj.com
