hadoop-common-user mailing list archives

From Ted Dunning <tdunn...@maprtech.com>
Subject Re: decommissioning node woes
Date Fri, 18 Mar 2011 17:34:41 GMT
I like to keep that rather high.  If I am decommissioning nodes, I generally
want them out of the cluster NOW.

That is probably a personality defect on my part.

On Fri, Mar 18, 2011 at 9:59 AM, Michael Segel <michael_segel@hotmail.com> wrote:

> Once you see those blocks successfully replicated... you can take down the
> next.
> Is it clean? No, not really.
> Is it dangerous? No, not really.
> Do I recommend it? No, but it's a quick and dirty way of doing things...
> Or you can up your dfs.balance.bandwidthPerSec in the configuration files.
> The default is pretty low.
> The downside is that you have to bounce the cloud to get this value
> updated, and it could have a negative impact on performance if set too high.
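[For reference, the setting Michael describes goes in hdfs-site.xml on each datanode. A minimal sketch, assuming a target of roughly 10 MB/s per datanode; the default in Hadoop of this era is 1048576 bytes (1 MB/s), and the value takes effect only after restarting the datanodes, as noted above:]

```xml
<!-- hdfs-site.xml: throttle for balancer/decommission block transfers -->
<!-- Value is in bytes per second, per datanode. 10485760 = ~10 MB/s. -->
<!-- Assumption: 10 MB/s is illustrative; tune to your network capacity. -->
<property>
  <name>dfs.balance.bandwidthPerSec</name>
  <value>10485760</value>
</property>
```

[Later Hadoop releases also added `hdfs dfsadmin -setBalancerBandwidth <bytes>`, which adjusts this at runtime without a restart, though that command is not available in all 0.20-era versions.]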
