hadoop-hdfs-issues mailing list archives

From "LiuLei (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-5442) Zero loss HDFS data replication for multiple datacenters
Date Tue, 24 Dec 2013 08:40:51 GMT

    [ https://issues.apache.org/jira/browse/HDFS-5442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13856232#comment-13856232 ]

LiuLei commented on HDFS-5442:
------------------------------

Hi Jerry Chen,
Thanks for your detailed answer.

I have the following questions:
1. If I want to store three replicas in the secondary cluster with synchronous data writing, does the primary cluster need to maintain the NetworkTopology of the datanodes in the secondary cluster?
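
To make the question concrete, here is a minimal sketch of what I mean by the primary cluster maintaining a NetworkTopology for the secondary cluster's datanodes. This is only my own illustration using the existing org.apache.hadoop.net API, not code from the design doc, and the hostnames and rack paths are made up:

    import org.apache.hadoop.net.NetworkTopology;
    import org.apache.hadoop.net.NodeBase;

    public class SecondaryTopologySketch {
        public static void main(String[] args) {
            // Topology of the *secondary* cluster, as the primary namenode
            // would have to see it to do rack-aware replica placement there.
            NetworkTopology secondaryTopology = new NetworkTopology();
            secondaryTopology.add(new NodeBase("dn1.dc2.example.com:50010", "/dc2/rack1"));
            secondaryTopology.add(new NodeBase("dn2.dc2.example.com:50010", "/dc2/rack1"));
            secondaryTopology.add(new NodeBase("dn3.dc2.example.com:50010", "/dc2/rack2"));
            System.out.println("secondary datanodes known: " + secondaryTopology.getNumOfLeaves());
        }
    }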

2. If the active namenode of the secondary cluster has not received a heartbeat from a datanode for more than 30s, the datanode will be marked and treated as "stale" by default, and no data will be written to stale datanodes. When a datanode in the secondary cluster becomes "stale", does the active namenode of the secondary cluster send a DR_DN_AVAILABLE command to the namenodes in the primary cluster? And when the "stale" datanode becomes live again, does the active namenode of the secondary cluster send a DR_DN_AVAILABLE command to the namenodes in the primary cluster?
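
For context, the staleness behaviour I am referring to is the existing one controlled by these hdfs-site.xml properties (the interval shown is the default; note that avoiding stale datanodes for writes is not on by default and must be enabled explicitly):

    <property>
      <name>dfs.namenode.stale.datanode.interval</name>
      <!-- heartbeat silence in ms before a datanode is marked stale (default 30000) -->
      <value>30000</value>
    </property>
    <property>
      <name>dfs.namenode.avoid.write.stale.datanode</name>
      <!-- when true, stale datanodes are not chosen for new writes (default false) -->
      <value>true</value>
    </property>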

3. When the secondary cluster becomes the primary cluster, can the client automatically switch to the secondary cluster?
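
For comparison, within a single HA cluster the client already switches namenodes automatically through the configured failover proxy provider; I assume cross-cluster switching would need something analogous. A sketch of the intra-cluster mechanism, with a made-up nameservice and made-up hosts:

    <property>
      <name>dfs.nameservices</name>
      <value>mycluster</value>
    </property>
    <property>
      <name>dfs.ha.namenodes.mycluster</name>
      <value>nn1,nn2</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn1</name>
      <value>nn1.dc1.example.com:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn2</name>
      <value>nn2.dc1.example.com:8020</value>
    </property>
    <property>
      <name>dfs.client.failover.proxy.provider.mycluster</name>
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>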

> Zero loss HDFS data replication for multiple datacenters
> --------------------------------------------------------
>
>                 Key: HDFS-5442
>                 URL: https://issues.apache.org/jira/browse/HDFS-5442
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Avik Dey
>         Attachments: Disaster Recovery Solution for Hadoop.pdf, Disaster Recovery Solution for Hadoop.pdf
>
>
> Hadoop is architected to operate efficiently at scale for normal hardware failures within a datacenter. Hadoop is not designed today to handle datacenter failures. Although HDFS is not designed for nor deployed in configurations spanning multiple datacenters, replicating data from one location to another is common practice for disaster recovery and global service availability. There are current solutions available for batch replication using data copy/export tools. However, while providing some backup capability for HDFS data, they do not provide the capability to recover all your HDFS data from a datacenter failure and be up and running again with a fully operational Hadoop cluster in another datacenter in a matter of minutes. For disaster recovery from a datacenter failure, we should provide a fully distributed, zero data loss, low latency, high throughput and secure HDFS data replication solution for a multi-datacenter setup.
> Design and code for Phase-1 to follow soon.
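
(For reference, the "batch replication using data copy/export tools" mentioned in the description usually means a periodic DistCp job along these lines; the cluster addresses are made up:

    hadoop distcp -update hdfs://nn.dc1.example.com:8020/data hdfs://nn.dc2.example.com:8020/data

This copies changed files in bulk, but it cannot give the zero-data-loss, minutes-to-recover behaviour the description asks for.)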



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
