kafka-dev mailing list archives

From "Neha Narkhede (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (KAFKA-532) Multiple controllers can co-exist during soft failures
Date Fri, 02 Nov 2012 00:50:13 GMT

    [ https://issues.apache.org/jira/browse/KAFKA-532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13489181#comment-13489181 ]

Neha Narkhede commented on KAFKA-532:
-------------------------------------

>> 34. KafkaController: There seems to be a tricky issue with incrementing the controller
epoch. We increment the epoch in onControllerFailover() after the broker becomes the controller.
What could happen is that broker 1 becomes the controller and goes into a GC pause before we
increment the epoch. Broker 2 becomes the new controller and increments the epoch. Broker 1
comes back from GC and increments the epoch again. Now, broker 1's controller epoch is actually
larger. Not sure what's the best way to address this. One thought is that immediately after
the controller epoch is incremented in onControllerFailover(), we check if this broker is still
the controller (by reading the controller path in ZK). If not, we throw an exception. Also,
the epoch should probably be initialized to 0 if we want the first controller to have epoch 1.

Good point, I missed this earlier. However, just raising an exception after writing to the zk
path might not be the best solution, because it breaks the guarantee that the active controller
has the largest epoch in the system. I can't construct a concrete example that leads to a bug,
but violating that invariant could cause unforeseen, hard-to-debug issues in the future. On the
surface it doesn't seem incorrect, but here is another solution that covers the corner cases -

Every broker registers a watch on the /controllerEpoch persistent path and caches the latest
controller epoch and zk version. When a broker becomes the controller, it uses this cached zk
version to do a conditional write. Now, if another controller takes over while the current
controller is in a GC pause after the election, the new controller will use the previous
controller's zk version and successfully update the zk path. When the older controller comes
back, it will try to use a stale zk version and its zookeeper write will fail. It cannot
refresh its cached zk version before the write, since the read and the conditional write are
guarded by the same lock.
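
To make the scheme concrete, here is a rough sketch using the raw ZooKeeper client. The class
and method names (ControllerEpochCache, incrementEpoch) are mine for illustration, not the
actual patch; event filtering, serialization and error handling are simplified.

import org.apache.zookeeper.{KeeperException, WatchedEvent, Watcher, ZooKeeper}
import org.apache.zookeeper.data.Stat

// Illustrative sketch of the proposed scheme, not the actual KAFKA-532 patch.
class ControllerEpochCache(zk: ZooKeeper, path: String = "/controllerEpoch") {
  private val lock = new Object
  private var cachedEpoch = 0      // latest controller epoch seen by this broker
  private var cachedZkVersion = 0  // zk version of the node that stored it

  // Every broker keeps a watch on the persistent path so the cache is
  // refreshed whenever any controller bumps the epoch. (A real
  // implementation would filter out session events here.)
  private val watcher = new Watcher {
    override def process(event: WatchedEvent): Unit = refresh()
  }

  def refresh(): Unit = lock.synchronized {
    val stat = new Stat
    val data = zk.getData(path, watcher, stat)  // also re-registers the watch
    cachedEpoch = new String(data, "UTF-8").toInt
    cachedZkVersion = stat.getVersion
  }

  // Called by the broker that just won the controller election. The
  // conditional write succeeds only if no other broker has bumped the
  // epoch since this broker last read it; a controller returning from a
  // GC pause holds a stale version, so its write fails. The same lock
  // guards refresh() and this write, so the stale controller cannot pick
  // up the new zk version in between.
  def incrementEpoch(): Int = lock.synchronized {
    val newEpoch = cachedEpoch + 1
    try {
      val stat = zk.setData(path, newEpoch.toString.getBytes("UTF-8"), cachedZkVersion)
      cachedEpoch = newEpoch
      cachedZkVersion = stat.getVersion
      newEpoch
    } catch {
      case e: KeeperException.BadVersionException =>
        throw new IllegalStateException("Epoch update failed: another controller was elected", e)
    }
  }

  refresh()  // prime the cache and register the watch at startup
}

The key property is that the compare-and-set on the zk version turns "increment the epoch" and
"verify I am still the controller" into a single atomic step, so the active controller always
holds the largest epoch.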

If this sounds good, I will upload another patch that includes the fix.

> Multiple controllers can co-exist during soft failures
> ------------------------------------------------------
>
>                 Key: KAFKA-532
>                 URL: https://issues.apache.org/jira/browse/KAFKA-532
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.8
>            Reporter: Neha Narkhede
>            Assignee: Neha Narkhede
>            Priority: Blocker
>              Labels: bugs
>         Attachments: kafka-532-v1.patch, kafka-532-v2.patch, kafka-532-v3.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> If the current controller experiences an intermittent soft failure (GC pause) in the
> middle of leader election or partition reassignment, a new controller might get elected and
> start communicating new state change decisions to the brokers. After recovering from the soft
> failure, the old controller might continue sending some stale state change decisions to the
> brokers, resulting in unexpected failures. We need to introduce a controller generation id
> that increments with controller election. The brokers should reject any state change requests
> by a controller with an older generation id.
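
For illustration, the broker-side guard described above might look roughly like the following
sketch; the request type and field names (StateChangeRequest, highestSeenEpoch) are
hypothetical, not actual Kafka 0.8 code.

// Hypothetical sketch of the broker-side guard, not actual Kafka 0.8 code.
case class StateChangeRequest(controllerEpoch: Int /* , ... payload ... */)

class StateChangeHandler {
  private var highestSeenEpoch = 0  // highest controller epoch seen so far

  def handle(request: StateChangeRequest): Unit = synchronized {
    if (request.controllerEpoch < highestSeenEpoch) {
      // Stale request from a controller that has since been superseded.
      throw new IllegalStateException("Rejecting state change with stale controller epoch " +
        request.controllerEpoch + ", current epoch is " + highestSeenEpoch)
    }
    highestSeenEpoch = request.controllerEpoch
    // ... apply the state change ...
  }
}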

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
