hadoop-common-user mailing list archives

From Steve Loughran <ste...@apache.org>
Subject Re: decommissioning node woes
Date Fri, 18 Mar 2011 17:57:06 GMT
On 18/03/11 17:34, Ted Dunning wrote:
> I like to keep that rather high.  If I am decommissioning nodes, I generally
> want them out of the cluster NOW.

Depends on your backbone bandwidth, I guess. And on how well the switches 
really work versus how well they claim to work.

One thought here: does decommissioning give priority to blocks whose only 
replicas are on the machine(s) being decommissioned? If not, that's 
something to consider prioritising.

> That is probably a personality defect on my part.
> On Fri, Mar 18, 2011 at 9:59 AM, Michael Segel<michael_segel@hotmail.com>wrote:
>> Once you see those blocks successfully replicated... you can take down the
>> next.
>> Is it clean? No, not really.
>> Is it dangerous? No, not really.
>> Do I recommend it? No, but it's a quick and dirty way of doing things...
>> Or you can up your dfs.balance.bandwidthPerSec in the configuration files.
>> The default is pretty low.
>> The downside is that you have to bounce the cloud to get this value
>> updated, and it could have a negative impact on performance if set too high.
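For anyone following along, that property is set in hdfs-site.xml. A 
minimal sketch, assuming the property name quoted above (the 10 MB/s 
value is only an illustration; the shipped default is 1 MB/s, i.e. 
1048576 bytes/sec):

```xml
<!-- hdfs-site.xml: per-datanode cap, in bytes per second, on the
     bandwidth used by balancing/re-replication traffic.
     The 10 MB/s value below is illustrative only; tune it to your
     backbone. As noted above, datanodes must be restarted for a
     changed value to take effect. -->
<property>
  <name>dfs.balance.bandwidthPerSec</name>
  <value>10485760</value>
</property>
```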
