hadoop-common-user mailing list archives

From hexrat <nab...@senft.net>
Subject Dynamic machines within Hadoop cluster
Date Thu, 20 Sep 2007 04:26:44 GMT

I am looking at Hadoop as a platform for running some Google-style
map/reduce programs.  One thing I do not understand is how machines
join the cluster after processing has begun.  It appears the machines
in the cluster are configured up front and are immutable.  Is this so?

My understanding of the Google architecture is that if one or more machines
fail, the job scheduler simply brings additional machines into the cluster
and assigns them tasks.  How does this occur in Hadoop, given that the
machines must be specified by configuration up front?  Am I understanding
the architecture accurately?  Thanks in advance.
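For reference, the up-front list I am referring to is the conf/slaves file
that the start-up helper scripts read, with each worker also pointing at the
masters in its conf/hadoop-site.xml.  A minimal sketch (hostnames and ports
are placeholders; property names are the Hadoop 0.14-era defaults):

```
# conf/slaves -- one worker hostname per line; read by the
# bin/start-all.sh / bin/start-dfs.sh scripts at cluster start-up
node01.example.com
node02.example.com

# conf/hadoop-site.xml on each worker names the masters, e.g.:
#   fs.default.name    -> hdfs://master:9000   (NameNode address)
#   mapred.job.tracker -> master:9001          (JobTracker address)
```

My question is whether a machine outside this list can be brought in while
a job is running, or whether the list is fixed for the life of the cluster.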
