zookeeper-dev mailing list archives

From "Edward Ribeiro (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (ZOOKEEPER-261) Reinitialized servers should not participate in leader election
Date Thu, 12 Jan 2017 03:42:17 GMT

    [ https://issues.apache.org/jira/browse/ZOOKEEPER-261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15820066#comment-15820066 ]

Edward Ribeiro commented on ZOOKEEPER-261:

I wrote the comment below on GitHub, but for whatever reason it was not posted here, so I am duplicating
it just to see where/if I am mistaken. :)

"Hi @enixon,

I think your approach is very cool, for real. I have only had time to give your patch a first
pass so far (I hope to look closer soon, esp. the tests), but I would like to ask a dumb question.

What if we change the approach and, instead of using the initialize file for normal execution,
we use a recover (or rejoin) file whose presence denotes an exceptional restart of a ZK node?
That way, if and only if this file is present, we delete it and return -1L so that the node
cannot take part in elections until it catches up with the ensemble, etc.

If this file is not present, then we proceed as usual (i.e. return 0L). This way, we handle
the exceptional case by using the recover file. For example: node C (from a 3-node ensemble)
crashes due to disk-full exceptions. The operator then deletes the data/ directory and puts
the recover file there.

In my humble (and naive) opinion, it would avoid some headaches for ops people who might forget
to include the initialize file on a node or two during rolling upgrades, or in other cases I
can't think of right now. Requiring that file for normal execution changes the ordinary
operation of a ZK node. With a recover file, we don't have to change the standard way of starting
a ZK node: it is only for exceptional cases, where we want to make sure the restarting
node cannot take part in an election.
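To make the suggestion concrete, here is a minimal sketch of the recover-file check being proposed (the class and method names are hypothetical, and this is not the actual patch; -1L and 0L are just the return values mentioned above):

```java
import java.io.File;

public class RecoverMarkerSketch {

    // Hypothetical sketch of the recover-file idea: -1L means "do not
    // take part in leader election until resynced", 0L is the normal path.
    static long startupEpoch(File dataDir) {
        File recoverMarker = new File(dataDir, "recover");
        if (recoverMarker.exists()) {
            recoverMarker.delete(); // one-shot marker: consume it on startup
            return -1L;             // exceptional restart, do not vote yet
        }
        return 0L;                  // no marker, proceed as usual
    }
}
```

Deleting the marker on first use means a node can only be held out of elections for the single restart the operator intended, after which it starts normally again.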

PS: I didn't get the autocreateDB stuff either. But it's late at night here. 😄


/cc [~hanm] [~breed] [~fpj]

PS2: The scenario described in the JIRA is a good point in favor of an {{initialize}} file,
because when B & C came back **automatically**, the {{initialize}} file would be missing
from both nodes, and the ensemble would grind to a halt because no one could become leader, right?
On the other hand, if there were a script to **automatically** create those files on each node once
the machine was brought up, then B & C would have the file created and we would be back
to square one, right? Does what I am writing make any sense? Please, lecture me. :)"
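For contrast, a hedged sketch (hypothetical names again, not the real patch) of the {{initialize}}-file semantics being discussed: here the marker must be *present* for a freshly wiped node to be allowed to vote, which is what breaks down if B & C come back without it:

```java
import java.io.File;

public class InitializeMarkerSketch {

    // Hypothetical sketch: a node whose data dir is empty may only vote
    // if an operator deliberately placed an "initialize" marker there.
    static long startupEpoch(File dataDir, boolean hasData) {
        if (hasData) {
            return 0L; // node still has its state, normal startup
        }
        File initMarker = new File(dataDir, "initialize");
        return initMarker.exists() ? 0L : -1L; // wiped dir needs the marker
    }
}
```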

> Reinitialized servers should not participate in leader election
> ---------------------------------------------------------------
>                 Key: ZOOKEEPER-261
>                 URL: https://issues.apache.org/jira/browse/ZOOKEEPER-261
>             Project: ZooKeeper
>          Issue Type: Improvement
>          Components: leaderElection, quorum
>            Reporter: Benjamin Reed
> A server that has lost its data should not participate in leader election until it has
resynced with a leader. Our leader election algorithm and NEW_LEADER commit assume that the
followers voting on a leader have not lost any of their data. We should have a flag in the
data directory saying whether or not the data is preserved, so that the flag will be cleared
if the data is ever cleared.
> Here is the problematic scenario: you have an ensemble of machines A, B, and C. C is
down. The last transaction seen by C is z. A transaction, z+1, is committed on A and B. Now
there is a power outage. B's data gets reinitialized. When power comes back up, B and C come
up, but A does not. C will be elected leader and transaction z+1 is lost. (Note: this can
happen even if all three machines are up and C just responds quickly; in that case C would
tell A to truncate z+1 from its log.) In theory we haven't violated our 2f+1 guarantee, since
A has failed and B still hasn't recovered from failure, but it would be nice if the system
stopped working when we lose quorum rather than working incorrectly.

This message was sent by Atlassian JIRA
