hadoop-common-user mailing list archives

From Amareshwari Sriramadasu <amar...@yahoo-inc.com>
Subject Re: how can I decommission nodes on-the-fly?
Date Wed, 26 Nov 2008 07:51:21 GMT
Jeremy Chow wrote:
> Hi list,
>
>  I added a property dfs.hosts.exclude to my conf/hadoop-site.xml. Then
> refreshed my cluster with command
>                      bin/hadoop dfsadmin -refreshNodes
> It showed that only the DataNode process was shut down; the TaskTracker
> process on each slave listed in the excludes file was not.
>   
Presently, decommissioning a TaskTracker on-the-fly is not supported.
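For reference, the DataNode side of the decommission Jeremy describes looks
roughly like the sketch below. The file paths and hostname are examples, not
taken from the original mail; adjust them to your installation.

```shell
#!/bin/sh
# Sketch of DataNode decommissioning in Hadoop 0.x (paths/hosts are examples).

# 1. Point dfs.hosts.exclude at a file listing hosts to decommission.
#    This <property> goes inside <configuration> in conf/hadoop-site.xml:
#
#      <property>
#        <name>dfs.hosts.exclude</name>
#        <value>/home/hadoop/conf/excludes</value>
#      </property>

# 2. List one hostname per line in the excludes file.
echo "slave1.example.com" >> /home/hadoop/conf/excludes

# 3. Tell the NameNode to re-read its includes/excludes files.
bin/hadoop dfsadmin -refreshNodes

# 4. Watch progress; the node moves to "Decommission in progress" and then
#    "Decommissioned" once its blocks have been re-replicated.
bin/hadoop dfsadmin -report
```

As the thread notes, this only retires the DataNode; the TaskTracker is unaffected.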
> The jobtracker web UI still shows that I had not shut down these nodes.
> How can I totally decommission these slave nodes on-the-fly? Can it be
> achieved only by operating on the master node?
>
>   
I think one way to shut down a TaskTracker is to kill it.
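Concretely, that can be done per slave with the stock daemon script (or a
plain kill of the TaskTracker JVM). The hostname and install path below are
examples:

```shell
#!/bin/sh
# Sketch: stopping a TaskTracker by hand (hostname/path are examples).

# On the slave being removed, stop the TaskTracker daemon directly:
bin/hadoop-daemon.sh stop tasktracker

# Or from the master, over ssh:
ssh slave1.example.com "/home/hadoop/bin/hadoop-daemon.sh stop tasktracker"
```

The JobTracker will mark the tracker as lost once it stops heartbeating, so it
disappears from the web UI after the expiry interval rather than immediately.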

Thanks
Amareshwari
> Thanks,
> Jeremy
>
>   

