hadoop-hdfs-user mailing list archives

From Yanbo Liang <yanboha...@gmail.com>
Subject Re: are we able to decommission multi nodes at one time?
Date Mon, 01 Apr 2013 11:17:53 GMT
It's allowable to decommission multiple nodes at the same time.
Just write all the hostnames to be decommissioned to the exclude file
and run "bin/hadoop dfsadmin -refreshNodes".
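The steps above can be sketched as follows (the exclude-file path and
hostnames are illustrative assumptions; use the path your cluster's
dfs.hosts.exclude property actually points at):

```shell
# Minimal sketch, assuming dfs.hosts.exclude in hdfs-site.xml points here.
EXCLUDE_FILE=/tmp/dfs.exclude

# 1. List every DataNode to decommission at once, one hostname per line.
cat > "$EXCLUDE_FILE" <<'EOF'
datanode-07.example.com
datanode-08.example.com
EOF

# 2. Tell the NameNode to re-read the include/exclude files (run on the
#    NameNode host); listed nodes move to "Decommission In Progress"
#    while their block replicas are copied to the remaining nodes:
#       bin/hadoop dfsadmin -refreshNodes
# 3. Watch progress until the nodes show "Decommissioned":
#       bin/hadoop dfsadmin -report

wc -l < "$EXCLUDE_FILE"   # number of hosts queued for decommission
```

The refresh is a single operation regardless of how many hostnames are
in the file, which is why multiple nodes can be decommissioned in one go.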

However, you need to ensure that the decommissioned DataNodes are a
minority of all the DataNodes in the cluster, and that the block
replication level can still be satisfied after decommissioning.

For example, job submission files are replicated with
mapred.submit.replication=10 by default. So if fewer than 10 DataNodes
remain after decommissioning, the decommission process will hang.


2013/4/1 varun kumar <varun.uid@gmail.com>

> How many nodes do you have and replication factor for it.
>
