hadoop-hdfs-dev mailing list archives

From Joe Bounour <jboun...@ddn.com>
Subject Re: data center aware hadoop?
Date Sat, 22 Sep 2012 03:24:00 GMT

Interesting topic, but is it really a general use case? The average HDFS
cluster today is smaller than 100 nodes, yet that is still a lot of stored
data. You would have to synchronize petabytes over high-latency networks,
and I would assume you are using HDFS as an archive (meaning you do not
replace the content often). Social-network companies cycle logs through
HDFS because the most recent data is their focus.

HDFS site protection would have to be asynchronous for sure (for
performance), and data consistency would have to be handled as well, which
is never simple; you could use a WAN accelerator. Of course, it is all
doable.

HDFS already has protection (3 replicas); disaster recovery is mainly
relevant for the namenode. Hopefully you would not lose HDFS content.
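For context, that replica protection is already topology-aware: HDFS resolves each host to a location via a pluggable topology script (the `net.topology.script.file.name` setting), and a data-center layer would extend the same host-to-location mapping. A minimal sketch in that style, assuming a hypothetical `dcN-rackM-...` hostname convention:

```python
# Sketch of a topology mapping in the style of HDFS's pluggable topology
# script: given a hostname, emit a slash-separated location path. The
# "dcN-rackM-node" hostname convention is a made-up example; real
# deployments usually map hosts via a lookup table or DNS convention.
DEFAULT_LOCATION = "/default-dc/default-rack"

def resolve(hostname):
    parts = hostname.split("-")
    if len(parts) >= 2 and parts[0].startswith("dc") and parts[1].startswith("rack"):
        # Emitting the data-center layer is what would let a placement
        # policy keep at least one replica in another DC.
        return "/%s/%s" % (parts[0], parts[1])
    return DEFAULT_LOCATION

assert resolve("dc1-rack2-node3") == "/dc1/rack2"
assert resolve("somehost") == "/default-dc/default-rack"
```

As the reply below notes, today's replica placement, balancer, and task scheduling only honor the rack level of such a path, not the data-center level.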

Enterprise Ops requirements would point to a SAN solution for the
datanodes, replicating the storage array or backing it up; if you cannot
use SAN and are stuck with DAS, make more copies for a higher protection
level (be pragmatic and save $$).
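The "more copies" approach on DAS is just the standard replication factor. A minimal hdfs-site.xml sketch raising it from the default of 3 (the value 4 is only an example):

```xml
<!-- hdfs-site.xml: default replication factor for new files.
     The HDFS default is 3; raising it trades disk space for protection. -->
<property>
  <name>dfs.replication</name>
  <value>4</value>
</property>
```

This only affects new files; the replication factor of existing files can be changed with `hdfs dfs -setrep`.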

Maybe I am missing the point below, but why is it really needed?


On 9/21/12 5:09 PM, "Jun Ping Du" <jdu@vmware.com> wrote:

>Hi Sujee,
>   HDFS today does not consider data-center-level reliability much
>(the topology is supposed to extend to a data center layer, but that
>layer is never honored in the replica placement, balancer, or task
>scheduling policies), and performance is part of the concern with
>crossing data centers (assuming cross-DC bandwidth is lower than
>within a data center). However, in the future I think we should
>deliver a solution that enables data-center-level disaster recovery
>even if performance is degraded. My several years of experience
>delivering enterprise software tell me it is best to let the customer
>make the trade-off decision between performance and reliability; the
>engineering effort is to provide the options.
>BTW, HDFS HA protects key nodes from SPOF but does not handle a whole
>data center shutdown.
>----- Original Message -----
>From: "Sujee Maniyam" <sujee@sujee.net>
>To: "hdfs-dev" <hdfs-dev@hadoop.apache.org>
>Sent: Tuesday, September 11, 2012 7:29:39 AM
>Subject: data center aware hadoop?
>Hi devs,
>Now that HDFS HA is a reality, how about HDFS spanning multiple
>data centers? Are there any discussions / work going on in this area?
>It could be a single cluster spanning multiple data centers, or a
>'standby cluster' in another data center.
>Curious, and thanks for your time!
>Sujee Maniyam
