hadoop-common-user mailing list archives

From Matthew Foley <ma...@yahoo-inc.com>
Subject Re: Cluster hard drive ratios
Date Fri, 06 May 2011 17:15:52 GMT
Ah, so you're suggesting there should be some hysteresis in the system, delaying the
response to large-scale events for a while?

In particular, are you suggesting that for anticipated events, like 
	"I'm taking this rack offline for 30 minutes, 
	but it will be back with data intact, AND 
	I don't care if a bunch of blocks are unavailable and/or 
	at-risk-because-of-under-replication during that time", 
we should be able to tell the Namenode to tolerate that condition for a while?

I can think of a number of problems (in the DWIM, "do what I mean", class), but they should be
weighed against the pragmatic benefits.  Is there a Jira open for it yet?  If not, why don't you
open one, and float a problem statement and user story? Let's see if there is significant interest
from the Service Engineering branch of the user community.

--Matt


On May 6, 2011, at 5:33 AM, Steve Loughran wrote:

On 05/05/11 19:14, Matthew Foley wrote:
> "a node (or rack) is going down, don't replicate" == DataNode Decommissioning.
> 
> This feature is available.  The current usage is to add the hosts to be decommissioned
> to the exclusion file named in dfs.hosts.exclude, then use DFSAdmin to invoke "-refreshNodes".
> (Search for "decommission" in DFSAdmin source code.)  NN will stop using these servers as
> replication targets, and will re-replicate all their replicas to other hosts that are still
> in service.  The count of nodes that are in the process of being decommissioned is reported
> in the NN status web page.
> 

I'm thinking more of "don't overreact to 50 machines going offline by 
re-replicating all blocks whose replication count has just dropped by 1; 
don't start until the rack has been offline for >30 minutes."
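
For anyone who wants to try the decommission path described in the quoted text above, the
steps boil down to roughly the following sketch; the file path and hostname are just examples,
not anything mandated by HDFS:

	<!-- hdfs-site.xml on the NameNode: point at an exclude file -->
	<property>
	  <name>dfs.hosts.exclude</name>
	  <value>/etc/hadoop/conf/dfs.exclude</value>
	</property>

	# list the hosts to take out of service, one per line
	$ echo "dn101.example.com" >> /etc/hadoop/conf/dfs.exclude
	# ask the NameNode to re-read its include/exclude lists
	$ hadoop dfsadmin -refreshNodes
	# decommission progress also shows up in this report and on the NN status web page
	$ hadoop dfsadmin -report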
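
On Steve's "wait >30 minutes before reacting" point: as far as I know the closest existing
knob (cluster-wide, not the targeted per-rack hint being proposed here) is the dead-node
timeout the NameNode derives from heartbeats.  It only starts re-replicating a DataNode's
blocks after roughly 2 * recheck-interval + 10 * heartbeat-interval, about 10.5 minutes with
defaults, so stretching the recheck interval delays the reaction.  Property names vary by
release (heartbeat.recheck.interval in older releases, dfs.namenode.heartbeat.recheck-interval
later); values below are illustrative only:

	<!-- hdfs-site.xml: illustrative values only -->
	<property>
	  <!-- DataNode heartbeat period in seconds (default 3) -->
	  <name>dfs.heartbeat.interval</name>
	  <value>3</value>
	</property>
	<property>
	  <!-- NameNode recheck period in milliseconds (default 300000);
	       dead-node timeout ~= 2 * recheck + 10 * heartbeat,
	       so 900000 here gives roughly a 30-minute grace period -->
	  <name>heartbeat.recheck.interval</name>
	  <value>900000</value>
	</property>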

