zookeeper-user mailing list archives

From Ted Dunning <ted.dunn...@gmail.com>
Subject Re: A few questions on zookeeper
Date Tue, 07 Dec 2010 20:38:07 GMT
You can get something "quorum disk"-like by running two ZK instances on one
of your two machines.

You can also do this dynamically via the standard rolling-reconfigure
trick.
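The quorum arithmetic behind all of this is simple; a rough sketch (plain majority quorums only, ignoring ZooKeeper's weighted/hierarchical options):

```python
# Sketch of plain-majority quorum arithmetic: an ensemble of n servers
# needs floor(n/2) + 1 votes, so it tolerates f = (n - 1) // 2 failures.

def majority(n: int) -> int:
    """Votes needed for a quorum in an n-server ensemble."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """Servers that can fail while a quorum survives."""
    return (n - 1) // 2

for n in (2, 3, 4, 5):
    print(n, majority(n), tolerated_failures(n))
# Note that a 2-server ensemble needs both servers up (majority = 2,
# tolerates 0 failures), which is why an even-sized ensemble gains no
# fault tolerance over one of size n - 1.
```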

On Tue, Dec 7, 2010 at 12:32 PM, David Alves <davidralves@gmail.com> wrote:

> Hi Ted and Jared
>
>        Thank you for your replies. Sorry in advance if my questions are too
> absurd.
>        I know that, currently, with two servers the cluster would have a
> higher failure probability (since the ensemble started with three and it
> needs a two-node majority, if any one server fails there is no longer a
> majority and the cluster hangs, right?).
>        The purpose of the quorum disk would be to function as a tie
> breaker in even-node deployments (namely two), by representing an external,
> storage-only resource where cluster state would be maintained. I know there
> are some Paxos-based deployments that use this technique, although I'm not
> sufficiently into the internals of ZooKeeper to assert whether it would
> work.
>        A question about an alternative... would it be possible to change
> the majority function (dynamically) in two-node deployments?
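ZooKeeper's hierarchical-quorum support may come close to this: group and weight options let a deployment redefine what counts as a majority. A hypothetical fragment, not verified against any particular version:

```
# Hypothetical weighted-quorum fragment for a two-server ensemble.
# Server 1 (weight 2) alone holds a majority of the total weight
# (2 of 3), so the ensemble survives the loss of server 2 but not
# of server 1 -- server 1 effectively acts as the tie breaker.
# Check your ZooKeeper version's admin guide before relying on this.
group.1=1:2
weight.1=2
weight.2=1
```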
>        Regarding ZOOKEEPER-107, that is very interesting, as we were
> considering using ZooKeeper in an "elastic" setting. Is there progress on
> this issue? Can I help?
>
> Regards
> David
>
> On Dec 6, 2010, at 5:02 PM, Ted Dunning wrote:
>
> David,
>
> For your second point, take a look at
> https://issues.apache.org/jira/browse/ZOOKEEPER-107.  Unfortunately,
> Zookeeper does not support this feature yet.
>
> ~Jared
>
> > You can definitely do this, but your reliability will be different from
> what
> > might be expected.  The probability of the cluster hanging due to a
> failure
> > is higher than for a single machine.  The probability of data loss will
> be
> > lower than for a single machine.  Similarly, any maintenance that
> requires
> > one of the two machines to be down will cause your cluster to hang.
> >
> > Can you clarify what you mean by 2 server + 1 quorum disk?
> >
> > Btw... one way around the downtime during maintenance is to reconfigure
> the
> > cluster on the fly to use a single server
> > during the maintenance window.  You will still have a short window of
> freeze
> > because you need to take down the
> > server that is exiting the cluster first before bouncing the other server
> > with the new config.  If your maintenance period is less than a few
> minutes,
> > this isn't usually worthwhile.
> >
> > On Mon, Dec 6, 2010 at 7:43 AM, David Alves <davidralves@gmail.com>
> wrote:
> >
> >>       1- Feasibility of a two-node cluster:
> >>               Q- I know ZooKeeper runs over (an approximate version of)
> >> Paxos and tolerates f failures in 2f+1 nodes, but would it be possible to
> >> use it in a 2 server + 1 quorum disk deployment (much like Windows
> >> clusters do)? The objective would be to use ZK both for distributed
> >> (processing nodes > 2) and single-processing-node (2 nodes,
> >> active-passive) highly available deployments.
> >>
>
>
