hadoop-user mailing list archives

From Mohammad Tariq <donta...@gmail.com>
Subject Re: Stopping a single Datanode
Date Thu, 16 Aug 2012 21:07:08 GMT
Hello Terry,

    You can ssh to the node where you want to stop the DN and run the
stop command there. Something like this:
cluster@ubuntu:~/hadoop-1.0.3$ bin/hadoop-daemon.sh --config \
    /home/cluster/hadoop-1.0.3/conf/ stop datanode
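
For a remote node, the same command can be pushed over ssh; the user,
the host name "slave1", and the install path below are placeholders
only, so substitute your own:

cluster@ubuntu:~$ ssh cluster@slave1 \
    '/home/cluster/hadoop-1.0.3/bin/hadoop-daemon.sh --config /home/cluster/hadoop-1.0.3/conf/ stop datanode'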

Regards,
    Mohammad Tariq



On Fri, Aug 17, 2012 at 2:26 AM, Terry Healy <thealy@bnl.gov> wrote:

> Thanks guys. I will need the decommission in a few weeks, but for now
> it's just a simple system move. I found out the hard way not to keep a
> masters and slaves file in the conf directory of a slave: when I tried
> bin/stop-all.sh, it stopped processes everywhere.
>
> That gave me the idea to list its own name as the only entry in slaves,
> which might then work as expected... but if I can just kill the process,
> that is even easier.
>
>
> On 08/16/2012 03:49 PM, Harsh J wrote:
> > Perhaps what you're looking for is the Decommission feature of HDFS,
> > which lets you safely remove a DN without incurring replica loss? It
> > is detailed in Hadoop: The Definitive Guide (2nd Edition), page 315 |
> > Chapter 10: Administering Hadoop / Maintenance section - Title
> > "Decommissioning old nodes", or at
> > http://developer.yahoo.com/hadoop/tutorial/module2.html#decommission?
> >
> > On Fri, Aug 17, 2012 at 12:41 AM, Terry Healy <thealy@bnl.gov> wrote:
> >> Sorry - this seems pretty basic, but I could not find a reference
> >> online or in my books. Is there a graceful way to stop a single
> >> datanode (for example, to move the system to a new rack where it will
> >> be put back online), or do you just whack the process ID and let HDFS
> >> clean up the mess?
> >>
> >> Thanks
> >>
> >
> >
> >
>
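
For reference, a minimal sketch of the decommission workflow Harsh points
to, assuming a Hadoop 1.x setup; the excludes-file path and the hostname
"slave1" are placeholders only:

# 1. On the NameNode, point dfs.hosts.exclude at an excludes file in
#    conf/hdfs-site.xml (if the property was not set before, the NameNode
#    needs a restart to pick it up):
#
#      <property>
#        <name>dfs.hosts.exclude</name>
#        <value>/home/cluster/hadoop-1.0.3/conf/excludes</value>
#      </property>
#
# 2. Add the DataNode's hostname to that file and tell the NameNode to
#    re-read its host lists:
cluster@ubuntu:~/hadoop-1.0.3$ echo "slave1" >> conf/excludes
cluster@ubuntu:~/hadoop-1.0.3$ bin/hadoop dfsadmin -refreshNodes

# 3. Wait until the node is reported as "Decommissioned" (here or in the
#    NameNode web UI), then stop the DataNode as shown above:
cluster@ubuntu:~/hadoop-1.0.3$ bin/hadoop dfsadmin -report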
