hadoop-common-user mailing list archives

From "Keliang Zhao" <kez...@cs.ucsd.edu>
Subject Re: How to add/remove slave nodes on run time
Date Fri, 11 Jul 2008 23:31:05 GMT
May I ask what is the right command to start a datanode on a slave?

I used a simple one, "bin/hadoop datanode &", but I am not sure it is right.

Also, should I start the tasktracker manually as well?
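
For reference, here is what I am planning to try on the new slave, assuming
the hadoop-daemon.sh wrapper in bin/ is the intended way to start the daemons
(I am not certain of that, so please correct me if there is a better way):

  # on the new slave, using the same Hadoop build and conf/ as the rest of the cluster
  bin/hadoop-daemon.sh start datanode
  bin/hadoop-daemon.sh start tasktracker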

-Kevin


On Fri, Jul 11, 2008 at 3:56 PM, lohit <lohit_bv@yahoo.com> wrote:
> To add new datanodes, run the same Hadoop version that is already running on your
> cluster with the right config, and start a datanode on the new node. The datanode
> reads the configs, finds the namenode, and joins the cluster. To remove datanode(s),
> decommission them first and, once they are decommissioned, just kill the DataNode
> process. This is described here: http://wiki.apache.org/hadoop/FAQ#17
>
> Thanks,
> Lohit
>
> ----- Original Message ----
> From: Kevin <klzhao@gmail.com>
> To: core-user@hadoop.apache.org
> Sent: Friday, July 11, 2008 3:43:41 PM
> Subject: How to add/remove slave nodes on run time
>
> Hi,
>
> I searched a bit but could not find the answer. What is the right way
> to add (and remove) slave nodes at run time? Thank you.
>
> -Kevin
>
>
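
Thanks. For the record, my reading of the decommission steps in that FAQ entry
is roughly the following. The property name dfs.hosts.exclude and the dfsadmin
-refreshNodes command come from the FAQ; the exclude-file path and the hostname
below are only placeholders:

  # 1. point the namenode at an exclude file, in conf/hadoop-site.xml:
  #      <property>
  #        <name>dfs.hosts.exclude</name>
  #        <value>/path/to/conf/excludes</value>
  #      </property>

  # 2. list the datanodes to retire, one hostname per line
  echo slave05.example.com >> /path/to/conf/excludes

  # 3. make the namenode re-read the exclude list and start decommissioning
  bin/hadoop dfsadmin -refreshNodes

  # 4. wait until the node is reported as "Decommissioned" (namenode web UI or
  #    "bin/hadoop dfsadmin -report"), then stop the DataNode process on it
  bin/hadoop-daemon.sh stop datanode

Does that match what you had in mind?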
