hadoop-common-user mailing list archives

From "Eddie C" <edlinuxg...@gmail.com>
Subject Re: Fault Tolerance: Inquiry for approaches to solve single point of failure when name node fails
Date Thu, 13 Mar 2008 21:21:31 GMT
According to the documentation, you can instruct the name node to write
its metadata to multiple places via the configuration file.
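
A minimal sketch of what that could look like in hadoop-site.xml, using
the standard dfs.name.dir property (the directory paths here are
hypothetical):

    <property>
      <name>dfs.name.dir</name>
      <!-- Comma-separated list: the name node replicates its image
           and edit log into every directory listed here. -->
      <value>/array1/hadoop/name,/array2/hadoop/name</value>
    </property>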

I would write the data to two separate directly attached disk arrays,
attach two servers to both arrays, and keep the second server as a
cold or hot spare, failing over with some type of clustering software
such as Linux-HA. That is how I would handle it.
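
A rough sketch of the Heartbeat (Linux-HA 1.x) side, assuming a floating
service IP and a hypothetical node name, device, and init script; a
starting point, not a tested failover setup:

    # /etc/ha.d/haresources: resources owned by the active node; the
    # standby takes them over when Heartbeat declares the peer dead.
    # IPaddr brings up the floating IP clients use to reach the name
    # node, Filesystem mounts the shared array, and hadoop-namenode is
    # an init script (hypothetical) that starts the name node daemon.
    namenode1 IPaddr::192.168.1.50 Filesystem::/dev/sdb1::/array1/hadoop/name::ext3 hadoop-namenode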


On Thu, Mar 13, 2008 at 5:05 PM, Cagdas Gerede <cagdas.gerede@gmail.com> wrote:
> > If your data center fails, then you probably have to worry more
> > about how to get your data.
>
>  I assume there are multiple data centers. I know that, thanks to HDFS
>  replication, the data in the other data center will be enough.
>  However, as far as I can see, HDFS has no support for replicating
>  the namenode.
>  Is this true?
>  If there is no automated support, and if I need to do this replication
>  with some custom code or manual intervention,
>  what are the steps to do this replication?
>
>  Any help is appreciated.
>
>  Cagdas
>
