hadoop-common-user mailing list archives

From Steve Loughran <ste...@apache.org>
Subject Re: how can I decommission nodes on-the-fly?
Date Wed, 26 Nov 2008 11:40:27 GMT
lohit wrote:
> As Amareshwari said, you can almost safely stop TaskTracker process on node. Task(s)
running on that would be considered failed and would be re-executed by JobTracker on another
node. Reason why we decomission DataNode is to protect against data loss. DataNode stores
HDFS blocks, by decomissioning you would be asking NameNode to copy over the block is has
to some other datanode. 
> 
> Thanks,
> Lohit
> 
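For reference, DataNode decommissioning is driven by the NameNode's exclude file. A minimal sketch of the procedure — the file path and hostname below are illustrative, and this assumes `dfs.hosts.exclude` is already set in the cluster configuration:

```shell
# Assumed configuration (hadoop-site.xml) — the exclude-file path is illustrative:
#   <property>
#     <name>dfs.hosts.exclude</name>
#     <value>/path/to/conf/excludes</value>
#   </property>

# Add the hostname of the node to decommission (illustrative hostname):
echo "datanode3.example.com" >> /path/to/conf/excludes

# Tell the NameNode to re-read its include/exclude lists; it then starts
# re-replicating that node's blocks onto other DataNodes:
bin/hadoop dfsadmin -refreshNodes

# Watch the node's state until it reports "Decommissioned", after which
# the process can be stopped safely:
bin/hadoop dfsadmin -report
```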

At some point in the future, I could imagine it being handy to have the 
ability to decommission a task tracker, which would tell it to stop 
accepting new work and run its remaining tasks down. This would be good 
when tasks take time to run but you still want to be agile in your 
cluster management.
