hadoop-common-user mailing list archives

From: Karthik Kumar <karthik84ku...@gmail.com>
Subject: Re: Task tracker and Data node not stopping
Date: Tue, 20 Jul 2010 03:03:43 GMT
Hi Ken,

        Thank you for your quick reply. I don't know how to find the process
which is overwriting those files. Anyhow, I re-installed Cygwin from
scratch and the problem is solved.

On Thu, Jul 15, 2010 at 9:49 PM, Ken Goodhope <kengoodhope@gmail.com> wrote:

> Inside hadoop-env.sh, you will see a property that sets the directory the
> pids are written to.  Check which directory it is and then investigate the
> possibility that some other process is deleting or overwriting those
> files.  If you are using NFS, with all nodes pointing at the same
> directory, then it might be a matter of each node overwriting the same
> file.
>
> Either way, the stop scripts look for those pid files and use them to
> stop the correct daemon.  If a file is not found, or if it contains the
> wrong pid, the script will echo that there is no process to stop.
>
> On Thu, Jul 15, 2010 at 4:51 AM, Karthik Kumar <karthik84kumar@gmail.com>
> wrote:
>
> > Hi,
> >
> >      I am using a cluster of two machines, one master and one slave.
> > When I try to stop the cluster using stop-all.sh, it displays the output
> > below. The tasktracker and datanode on the slave are also not stopped.
> > Please help me in solving this.
> >
> > stopping jobtracker
> > 160.110.150.29: no tasktracker to stop
> > stopping namenode
> > 160.110.150.29: no datanode to stop
> > localhost: stopping secondarynamenode
> >
> >
> > --
> > With Regards,
> > Karthik
> >
>
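
For anyone who lands on this thread later: the property Ken mentions is
HADOOP_PID_DIR in conf/hadoop-env.sh (pid files default to /tmp when it is
unset). A minimal sketch of pointing it at node-local storage, so that nodes
sharing an NFS-mounted directory do not overwrite each other's pid files;
the path below is only an example:

    # conf/hadoop-env.sh
    # Directory where daemon pid files are written (defaults to /tmp).
    # Use a node-local path rather than an NFS-shared one, so each node
    # keeps its own namenode/datanode/jobtracker/tasktracker pid files.
    export HADOOP_PID_DIR=/var/hadoop/pids   # example path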
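
For completeness, the "no tasktracker to stop" / "no datanode to stop"
messages come from the stop branch of bin/hadoop-daemon.sh, which reads the
pid file and signals that process. Roughly (a paraphrase, not a verbatim
copy of the script):

    # stop branch of bin/hadoop-daemon.sh, paraphrased
    pid=$HADOOP_PID_DIR/hadoop-$HADOOP_IDENT_STRING-$command.pid
    if [ -f "$pid" ]; then
      if kill -0 `cat "$pid"` > /dev/null 2>&1; then
        echo stopping $command
        kill `cat "$pid"`
      else
        echo no $command to stop    # pid file points at a dead or foreign process
      fi
    else
      echo no $command to stop      # pid file was never written or was removed
    fi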



-- 
With Regards,
Karthik
