zookeeper-user mailing list archives

From Shawn Heisey <apa...@elyograg.org>
Subject Re: ZooKeeper in different datacenters
Date Wed, 22 Aug 2018 16:36:10 GMT
On 8/22/2018 10:02 AM, ilango dhandapani wrote:
> 1. To have disaster recovery, planning to have 2 solr servers on 1st DC and
> other 2 solr servers on 2nd DC. Seems there should not be any issue here.
> Each shard will have 1st node in 1st DC and 2nd node in 2nd DC.

For Solr nodes in a SolrCloud setup, this is fine.  But keep reading, 
because your overall plan isn't going to work.

> 2. Planing to run 3 zk nodes on 1st DC and 3 zk nodes on 2nd DC. Now will
> affect the performance ?

ZooKeeper cannot be made fully fault tolerant with only two 
datacenters.  It's simply not possible.  Quorum requires more than 
half of the configured ensemble, so no matter how you distribute the 
nodes, at least one of your datacenters will not have enough nodes to 
achieve quorum on its own.  With the 3+3 layout you've described, 
NEITHER datacenter can reach quorum (4 of 6) if the other becomes 
unreachable.  Shifting the balance doesn't help: with 4 ZK nodes in 
DC1 and 3 in DC2, losing DC1 leaves only 3 of 7, which still 
eliminates quorum.  You must have at least three datacenters for a 
distributed, fault-tolerant ZK setup.
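To make the arithmetic concrete, here's a quick sketch of the 
majority rule in plain Java.  The layouts are just the ones discussed 
above; a 2+2+1 spread across three datacenters is one hypothetical 
way to survive any single DC loss:

public class QuorumMath {
    // ZooKeeper requires a strict majority of the *configured*
    // ensemble, not just of the servers currently running.
    static boolean hasQuorum(int survivingNodes, int totalNodes) {
        return survivingNodes > totalNodes / 2;
    }

    public static void main(String[] args) {
        // 3 + 3 layout: 6 total, quorum needs 4.
        System.out.println(hasQuorum(3, 6)); // false: losing either DC kills quorum

        // 4 + 3 layout: 7 total, quorum needs 4.
        System.out.println(hasQuorum(3, 7)); // false: losing DC1 kills quorum
        System.out.println(hasQuorum(4, 7)); // true:  losing DC2 is survivable

        // 2 + 2 + 1 across three DCs: 5 total, quorum needs 3.
        System.out.println(hasQuorum(3, 5)); // true: any single DC can be lost
    }
}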

When a write is made to ZK, it must be acknowledged by a quorum (a 
majority of the ensemble) before the operation returns to the caller, 
so cross-datacenter latency slows every ZK write.  Solr does not 
write to ZK often unless Solr instances are frequently going down or 
coming up.  Index updates do NOT go through ZK.  The ZK database is 
consulted to discover where the replicas are, but the updates 
themselves are never written to ZK.
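As an illustration of that flow, here's a hedged SolrJ sketch 
(assuming the SolrJ 7.x/8.x builder API; the ZK addresses and 
collection name are made up).  The client pulls the cluster state 
from ZK to find the right shard leader, then sends the document 
itself directly to that Solr node over HTTP:

import java.util.List;
import java.util.Optional;

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class IndexViaCloudClient {
    public static void main(String[] args) throws Exception {
        // ZK ensemble addresses are hypothetical.
        List<String> zkHosts = List.of("zk1:2181", "zk2:2181", "zk3:2181");

        try (CloudSolrClient client =
                 new CloudSolrClient.Builder(zkHosts, Optional.empty()).build()) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "doc-1");

            // The client consults the ZK-held cluster state to locate
            // the shard leader, then POSTs the document straight to
            // that Solr node; the document never passes through ZK.
            client.add("mycollection", doc);
            client.commit("mycollection");
        }
    }
}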

> 3. Will this affect the replication between the solr nodes on different DCs?

This mailing list will have no idea about this -- it's for ZK.  I'm part 
of the Solr community though, so you're not completely out of luck.

The only thing that's going to affect Solr replication between data 
centers is the network latency between those data centers.  If that's 
low, replication will be fast.
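If you want a rough number for that latency, a simple TCP connect 
timing loop from one DC against a Solr node in the other will give 
you a ballpark (the hostname and port here are hypothetical):

import java.net.InetSocketAddress;
import java.net.Socket;

public class LatencyCheck {
    public static void main(String[] args) throws Exception {
        // Point this at a Solr node in the remote datacenter.
        String host = "solr-dc2.example.com";
        int port = 8983;

        for (int i = 0; i < 5; i++) {
            long start = System.nanoTime();
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, port), 5000);
            }
            long micros = (System.nanoTime() - start) / 1_000;
            System.out.printf("TCP connect %d: %d us%n", i, micros);
        }
    }
}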

Thanks,
Shawn

