hadoop-common-user mailing list archives

From Michael Segel <michael_se...@hotmail.com>
Subject RE: decommission a node
Date Tue, 06 Jul 2010 15:35:35 GMT

Alan,

You shouldn't need to worry about moving the data blocks off your node, assuming you're replicating
your blocks 3x. (I think that's the default...)
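
By the way, the default replication factor comes from dfs.replication in conf/hdfs-site.xml
(3 unless you've overridden it). A quick way to check and change it from the shell, assuming a
0.20-style install with bin/hadoop on the PATH and /user/alan standing in for your data directory:

    # list files with their blocks; the per-block output includes repl=N
    bin/hadoop fsck /user/alan -files -blocks

    # change replication for an existing tree (-R recursive, -w wait until done)
    bin/hadoop dfs -setrep -R -w 3 /user/alan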

You can bring down your node, and within 15 mins, Hadoop will recognize that node as down...

I think if you do a $> hadoop fsck / you'll see that those blocks are under-replicated, and
they'll get re-replicated onto other machines.
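
For the record, fsck itself just reports (the namenode does the re-replication on its own), but
it's handy for watching the progress. A sketch, same bin/hadoop assumption as above:

    # the summary at the end reports under-replicated, corrupt, and missing block counts
    bin/hadoop fsck /

    # more detail: every file, its blocks, and which datanodes currently hold them
    bin/hadoop fsck / -files -blocks -locations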

If you're decommissioning a node, then you're taking the machine out of the cluster permanently.
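
If you do want to drain the node gracefully instead of just pulling it, the usual route is the
exclude file plus a refresh. A sketch, assuming dfs.hosts.exclude in conf/hdfs-site.xml already
points at an excludes file, and node7.example.com stands in for the node you want to retire:

    # 1. add the datanode's hostname to the excludes file
    echo "node7.example.com" >> conf/excludes

    # 2. tell the namenode to re-read its host lists; it starts copying that node's
    #    blocks onto the rest of the cluster
    bin/hadoop dfsadmin -refreshNodes

    # 3. watch dfsadmin -report (or the namenode web UI) until the node shows
    #    Decommissioned, then it's safe to shut it down
    bin/hadoop dfsadmin -report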

I'm also not sure how dropping a node tests scalability. You'd really be testing resilience.

HTH

-Mike
 

> Date: Tue, 6 Jul 2010 16:31:58 +0200
> To: common-user@hadoop.apache.org
> From: somebody@squareplanet.de
> Subject: decommission a node
> 
> Hi,
> 
> Is it possible to move all the data blocks off a cluster node and then decommission the node?
> 
> I'm asking because, now that my MR job is working, I'd like to see how things scale, i.e.,
> with fewer processing nodes, different amounts of data (number & size of files, etc.). I
> currently have 8 nodes, and am processing 5GB spread across 2000 files.
> 
> Alan