zookeeper-user mailing list archives

From Flavio Junqueira <fpjunque...@yahoo.com.INVALID>
Subject Re: cross DC setup - is it Ok for ZK?
Date Tue, 21 Oct 2014 22:35:21 GMT
Hierarchical quorums don't rely on strict majorities of the whole ensemble.
A quorum is formed by taking a majority of servers from a majority of groups,
so if you have 3 groups of 3 servers each, 4 servers form a quorum: 2 from
each of 2 distinct groups. This is a different way of doing quorums in
ZooKeeper, useful when grouping makes sense in your scenario, like when you
have multiple colos.
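
For concreteness, here is a minimal zoo.cfg sketch of that 3x3 layout (the
hostnames are illustrative; group.X lists server ids and weight.X assigns
each server a voting weight, as described in the ZooKeeper Administrator's
Guide):

    server.1=zk1.dc1.example.com:2888:3888
    server.2=zk2.dc1.example.com:2888:3888
    server.3=zk3.dc1.example.com:2888:3888
    server.4=zk1.dc2.example.com:2888:3888
    server.5=zk2.dc2.example.com:2888:3888
    server.6=zk3.dc2.example.com:2888:3888
    server.7=zk1.dc3.example.com:2888:3888
    server.8=zk2.dc3.example.com:2888:3888
    server.9=zk3.dc3.example.com:2888:3888

    # One group per colo; a quorum needs server majorities
    # in a majority of groups.
    group.1=1:2:3
    group.2=4:5:6
    group.3=7:8:9

    # Equal voting weights for all servers.
    weight.1=1
    weight.2=1
    weight.3=1
    weight.4=1
    weight.5=1
    weight.6=1
    weight.7=1
    weight.8=1
    weight.9=1

With this layout, any single colo can be down for maintenance and the
remaining two still yield majorities in 2 of the 3 groups.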

-Flavio 



On Tuesday, October 21, 2014 10:15 PM, Camille Fournier <camille@apache.org> wrote:

>
>
>You'll have to ask Flavio because I don't really understand what he's
>saying there, tbh. You have to have a strict majority of nodes, floor(n/2)
>+ 1, available and communicating with each other to maintain a live quorum
>(so, in your case of 7 nodes, you must have 4 available at all times).
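>
>A quick sketch of that arithmetic (illustrative Java, nothing
>ZooKeeper-specific; the class and method names are made up):
>
>    // Strict-majority quorum size for an n-server ensemble.
>    public class QuorumMath {
>        static int quorumSize(int n) {
>            return n / 2 + 1; // integer division: floor(n/2) + 1
>        }
>
>        public static void main(String[] args) {
>            System.out.println(quorumSize(7)); // 4 -> a 3+4 DC split loses
>                                               // quorum when the 4-node DC
>                                               // is down (3 < 4)
>            System.out.println(quorumSize(5)); // 3 -> a 2+3 split has the
>                                               // same problem (2 < 3)
>        }
>    }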
>
>
>On Tue, Oct 21, 2014 at 5:08 PM, Denis Samoilov <samoilov@gmail.com> wrote:
>
>> Camille, thank you very much! Very interesting read. I also found a thread
>> from three years ago in which you participated, and one comment there
>> confused me a bit: *"One quick comment. We do not require majority quorums
>> in ZooKeeper, and one reason we implemented this feature was exactly to
>> enable more flexibility in deployments with multiple data centers"* (
>>
>> http://mail-archives.apache.org/mod_mbox/zookeeper-user/201109.mbox/%3C0B4CC52A-939E-4896-A269-50DC31E20AA6@yahoo-inc.com%3E
>> )
>> but this potentially contradicts the FAQ: *"if the leader is in the non-quorum
>> side of the partition, that side of the partition will recognize that it no
>> longer has a quorum of the ensemble"* (
>> https://cwiki.apache.org/confluence/display/ZOOKEEPER/FailureScenarios).
>>
>> Where is the truth? :)
>>
>> On Tue, Oct 21, 2014 at 12:35 PM, Camille Fournier <camille@apache.org>
>> wrote:
>>
>> > I have a blog post on this topic:
>> >
>> >
>> http://whilefalse.blogspot.com/2012/12/building-global-highly-available.html
>> >
>> > I think you will find it helpful.
>> > The short answer is: the scheme you have proposed will leave ZK
>> > unavailable whenever you do maintenance on the data center that holds
>> > the 4 quorum members.
>> >
>> > Best,
>> > C
>> >
>> > On Tue, Oct 21, 2014 at 3:03 PM, Denis Samoilov <samoilov@gmail.com>
>> > wrote:
>> >
>> > > hi,
>> > >
>> > > Could you please help me understand the following setup: we have two
>> > > datacenters and want to set up a ZK cluster that uses servers (ZK
>> > > servers, not clients) in both, say 3 ZK servers in DC1 and 4 ZK
>> > > servers in DC2. We sometimes do maintenance in one DC or the other,
>> > > so ZK will completely lose its replicas in one of the DCs for several
>> > > hours. E.g. if DC2 is under maintenance, ZK will have only 3 out of 7
>> > > nodes, and these 3 nodes are supposed to receive writes.
>> > >
>> > > The questions:
>> > > 1) is it OK for ZK to have such a setup?
>> > > 2) will ZK catch up after losing 4 servers and getting them back some
>> > > time later? (that will actually be a majority :) )
>> > > 3) what is the right number of nodes; is 5 sufficient: 2 + 3?
>> > >
>> > > Latency between DCs is pretty low (DCs are close to each other).
>> > >
>> > >
>> > > Thank you for any advice.
>> > >
>> > > -Denis
>> > >
>> >
>>
>
>
>