curator-dev mailing list archives

From "Yuri Tceretian (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (CURATOR-504) Race conditions in LeaderLatch after reconnecting to ensemble
Date Tue, 05 Feb 2019 20:13:00 GMT

     [ https://issues.apache.org/jira/browse/CURATOR-504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuri Tceretian updated CURATOR-504:
-----------------------------------
    Attachment: XP91JuD048Nl_8h9NZpH01QZJMfCLewjfd2eQNfOsR6GuApPNVJlkonWQLBR8pjltdpp1gtUMsE1VOgQmXn95fL68Ha80taHb8hAsY0mtcfXbGwv8iBIrefoAook18a3wZ3o3JnV7JkyO_QDP2UMMH44Jr7WX50Rs6JyPRtEU15aHVUzLXAViK6BqidYAEhUmqS7iZYWXRAg6wcHO9ViWznti2-jIZIcfWCRJ5Z8N7CI_KrX.png

> Race conditions in LeaderLatch after reconnecting to ensemble
> -------------------------------------------------------------
>
>                 Key: CURATOR-504
>                 URL: https://issues.apache.org/jira/browse/CURATOR-504
>             Project: Apache Curator
>          Issue Type: Bug
>    Affects Versions: 4.1.0
>            Reporter: Yuri Tceretian
>            Assignee: Jordan Zimmerman
>            Priority: Minor
>         Attachments: 51868597-65791000-231c-11e9-9bfa-1def62bc3ea1.png, Screen Shot 2019-01-31 at 10.26.59 PM.png, XP91JuD048Nl_8h9NZpH01QZJMfCLewjfd2eQNfOsR6GuApPNVJlkonWQLBR8pjltdpp1gtUMsE1VOgQmXn95fL68Ha80taHb8hAsY0mtcfXbGwv8iBIrefoAook18a3wZ3o3JnV7JkyO_QDP2UMMH44Jr7WX50Rs6JyPRtEU15aHVUzLXAViK6BqidYAEhUmqS7iZYWXRAg6wcHO9ViWznti2-jIZIcfWCRJ5Z8N7CI (1).png, XP91JuD048Nl_8h9NZpH01QZJMfCLewjfd2eQNfOsR6GuApPNVJlkonWQLBR8pjltdpp1gtUMsE1VOgQmXn95fL68Ha80taHb8hAsY0mtcfXbGwv8iBIrefoAook18a3wZ3o3JnV7JkyO_QDP2UMMH44Jr7WX50Rs6JyPRtEU15aHVUzLXAViK6BqidYAEhUmqS7iZYWXRAg6wcHO9ViWznti2-jIZIcfWCRJ5Z8N7CI_KrX.png
>
>
> We use LeaderLatch in many places in our system, and when the ZooKeeper ensemble is unstable and clients are reconnecting, the logs fill up with messages like the following:
> {{[2017-08-31 19:18:34,562][ERROR][org.apache.curator.framework.recipes.leader.LeaderLatch] Can't find our node. Resetting. Index: -1 {}}}
> According to the [implementation|https://github.com/apache/curator/blob/4251fe328908e5fca37af034fabc190aa452c73f/curator-recipes/src/main/java/org/apache/curator/framework/recipes/leader/LeaderLatch.java#L529-L536], this can happen in two cases:
>  * When the internal state `ourPath` is null
>  * When the list of latches does not contain the expected one
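> The two failure cases can be sketched roughly as follows (a hypothetical simplification with illustrative names, not the actual Curator source): the index lookup yields -1 either because `ourPath` was cleared or because our node is absent from the children list.

```java
import java.util.Arrays;
import java.util.List;

public class LatchCheckSketch {
    // Returns the position of our latch node among the sorted children,
    // or -1 in both failure cases described above.
    static int findOurIndex(String ourPath, List<String> sortedChildren) {
        if (ourPath == null) {
            return -1; // case 1: internal state was cleared (e.g. by reset())
        }
        return sortedChildren.indexOf(ourPath); // case 2: -1 if our node is absent
    }

    public static void main(String[] args) {
        List<String> children = Arrays.asList("latch-0000000001", "latch-0000000002");
        System.out.println(findOurIndex(null, children));               // -1
        System.out.println(findOurIndex("latch-0000000003", children)); // -1
        System.out.println(findOurIndex("latch-0000000002", children)); // 1
    }
}
```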
> I believe we hit the first condition because of races that occur after the client reconnects to ZooKeeper.
>  * The client reconnects to ZooKeeper; LeaderLatch receives the event and calls the reset method, which sets the internal state (`ourPath`) to null, removes the old latch, and creates a new one. This happens in the thread "Curator-ConnectionStateManager-0".
>  * Almost simultaneously, LeaderLatch receives another event, NodeDeleted ([here|https://github.com/apache/curator/blob/4251fe328908e5fca37af034fabc190aa452c73f/curator-recipes/src/main/java/org/apache/curator/framework/recipes/leader/LeaderLatch.java#L543-L554]), and tries to re-read the list of latches and check leadership. This happens in the thread "main-EventThread".
> As a result, the `checkLeadership` method is sometimes called while `ourPath` is still null.
> Below is an approximate diagram of what happens:
> !51868597-65791000-231c-11e9-9bfa-1def62bc3ea1.png|width=1261,height=150!
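> The suspected interleaving can be replayed deterministically in a small sketch (hypothetical names and structure; not Curator's actual implementation): the connection-state thread's reset clears `ourPath` just before the event thread's leadership check reads it.

```java
import java.util.concurrent.atomic.AtomicReference;

public class LatchRaceSketch {
    static final AtomicReference<String> ourPath =
            new AtomicReference<>("latch-0000000005");

    // Runs on "Curator-ConnectionStateManager-0" after a RECONNECTED event:
    // clears the state, then asynchronously recreates the latch node.
    static void reset() {
        ourPath.set(null);
        // ... delete the old node, create a new one; ourPath is assigned
        // again only in the async create callback, leaving a window
        // during which it is null.
    }

    // Runs on "main-EventThread" for the NodeDeleted watch event.
    static String checkLeadership() {
        String path = ourPath.get();
        if (path == null) {
            return "ERROR Can't find our node. Resetting. Index: -1";
        }
        return "ok: checking index of " + path;
    }

    public static void main(String[] args) {
        reset();                               // step 1: reconnect handling clears ourPath
        System.out.println(checkLeadership()); // step 2: NodeDeleted arrives in the window
    }
}
```

> Guarding the leadership check against a null `ourPath` (or serializing it with reset) would close this window.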



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
