curator-dev mailing list archives

From "ASF GitHub Bot (JIRA)" <>
Subject [jira] [Commented] (CURATOR-495) Race and possible dead locks with RetryPolicies and several Curator Recipes
Date Mon, 17 Dec 2018 02:31:00 GMT


ASF GitHub Bot commented on CURATOR-495:

Github user cammckenzie commented on the issue:
    I have just run the tests and everything passed, which is a bit of a novelty! It still
failed at the end with that Surefire issue. I think that this is good to merge though. Nice work!

> Race and possible dead locks with RetryPolicies and several Curator Recipes
> ---------------------------------------------------------------------------
>                 Key: CURATOR-495
>                 URL:
>             Project: Apache Curator
>          Issue Type: Bug
>          Components: Recipes
>    Affects Versions: 4.0.1
>            Reporter: Jordan Zimmerman
>            Assignee: Jordan Zimmerman
>            Priority: Blocker
>             Fix For: 4.1.0
> In trying to figure out why {{TestInterProcessSemaphoreMutex}} is so flaky, I've come
across a fairly serious edge case in how several of our recipes work. You can see the issue
in {{InterProcessSemaphoreV2}} (which is what {{InterProcessSemaphoreMutex}} uses internally).
Look here:
> [|]
> The code synchronizes and then calls {{client.getChildren()...}}. This is where the problem
lies. If there are connection problems inside getChildren(), the retry policy will do its configured
sleeping, retrying, etc. Importantly, this all happens while the thread doing the retries holds
InterProcessSemaphoreV2's monitor. If the ZK connection is not repaired until after the session
timeout, ZK will eventually call InterProcessSemaphoreV2's watcher with an Expired message.
InterProcessSemaphoreV2's watcher calls this method:
> {code}
> private synchronized void notifyFromWatcher()
> {
>     notifyAll();
> }
> {code}
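> For illustration, the acquire side has roughly this shape (a condensed sketch with hypothetical
names, not the actual recipe code):
> {code}
> private synchronized boolean internalAcquire() throws Exception
> {
>     while ( true )
>     {
>         // getChildren() runs the retry policy internally; all of its sleeping
>         // and retrying happens while this thread still holds the monitor
>         List<String> children = client.getChildren().usingWatcher(watcher).forPath(leasesPath);
>         if ( canAcquire(children) )
>         {
>             return true;
>         }
>         wait();  // the monitor is released only while inside wait()
>     }
> }
> {code}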
> You can see that this is a race. The thread doing "getChildren" is holding the monitor
and is in a retry loop waiting for the connection to be repaired. However, ZK's event thread
is trying to obtain that same monitor in order to call the synchronized notifyFromWatcher().
This means that the retry policy will always fail, because ZK's event thread is tied up until
that thread exits. Worse still, if someone were to use a retry policy of "RetryForever" they'd
have a deadlock.
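> To make that concrete, here is a self-contained sketch of the same two-thread shape in plain
Java (no ZooKeeper involved; all names here are made up for illustration). The spawned thread
plays the recipe's retry loop; the main thread plays ZK's event thread and blocks forever:
> {code}
> public class MonitorDeadlockDemo
> {
>     private boolean connected = false;
>
>     // stand-in for the recipe's acquire: holds the monitor while "retrying"
>     private synchronized void acquireWithRetries() throws InterruptedException
>     {
>         while ( !connected )
>         {
>             Thread.sleep(100);  // the retry policy's sleep; monitor still held
>         }
>     }
>
>     // stand-in for the watcher callback: needs the same monitor to run
>     private synchronized void notifyFromWatcher()
>     {
>         connected = true;
>         notifyAll();
>     }
>
>     public static void main(String[] args) throws Exception
>     {
>         MonitorDeadlockDemo demo = new MonitorDeadlockDemo();
>         new Thread(() -> {
>             try
>             {
>                 demo.acquireWithRetries();
>             }
>             catch ( InterruptedException ignore )
>             {
>                 // exit
>             }
>         }).start();
>         Thread.sleep(500);          // let the "retry loop" grab the monitor
>         demo.notifyFromWatcher();   // blocks forever: this is the deadlock
>     }
> }
> {code}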
> This pattern is in about 10 files or so. I'm trying to think of a workaround. One possibility
is to use a separate thread for this type of notification. i.e. notifyFromWatcher() would
just signal another thread that the notifyAll() needs to be called. This would unblock ZK's
event thread so that things can progress. I'll play around with this.
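> A rough sketch of that idea (names are placeholders; this is a possibility being explored,
not a committed fix):
> {code}
> // requires java.util.concurrent.ExecutorService / Executors
> private final ExecutorService notifyService = Executors.newSingleThreadExecutor();
>
> private void notifyFromWatcher()
> {
>     // signal from a separate thread so that ZK's event thread is never
>     // blocked waiting for this object's monitor
>     notifyService.submit(() -> {
>         synchronized ( this )
>         {
>             notifyAll();
>         }
>     });
> }
> {code}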

