lucene-dev mailing list archives

From "Hoss Man (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (SOLR-13490) waitForState/registerCollectionStateWatcher can see stale liveNodes data due to (Zk) Watcher race condition
Date Wed, 12 Jun 2019 18:40:00 GMT

     [ https://issues.apache.org/jira/browse/SOLR-13490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Hoss Man updated SOLR-13490:
----------------------------
      Assignee: Hoss Man
    Attachment: SOLR-13490.patch
        Status: Open  (was: Open)

{quote}Your hypothetical watcher sounds like it wants to watch two things (live nodes and
state) and I like the solution to not conflate the two items. ...
{quote}
The problem is this isn't hypothetical: there are already lots of uses of CollectionStateWatcher
in the code base (not to mention the *additional* hypothetical end user use cases via the
existing CloudSolrClient's public API) that care about both the liveNodes and the DocCollection
instance (mostly because it's impossible to know if a replica listed in DocCollection is "active"
w/o consulting liveNodes).
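
For illustration, here's a minimal sketch of the kind of predicate in question, written against the existing two-argument {{CollectionStatePredicate}} signature (the {{allReplicasActive}} helper itself is hypothetical, not from the patch):

{code:java}
import java.util.Set;
import org.apache.solr.common.cloud.CollectionStatePredicate;
import org.apache.solr.common.cloud.DocCollection;
import org.apache.solr.common.cloud.Replica;

// A replica whose persisted state is ACTIVE is not *really* active if its node
// has fallen out of liveNodes -- a correct predicate must consult both inputs.
CollectionStatePredicate allReplicasActive = (Set<String> liveNodes, DocCollection coll) -> {
  if (coll == null) return false;
  for (Replica r : coll.getReplicas()) {
    if (r.getState() != Replica.State.ACTIVE || !liveNodes.contains(r.getNodeName())) {
      return false;
    }
  }
  return true;
};
{code}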
{quote}The watcher can watch one or the other and query when it sees a change, ...
{quote}
The problem with "Watch X, and then ask ZkStateReader for Y when notified about X" is that
Y might not be updated in the local state until *after* the notification of X happens –
anybody who cares about both X & Y really _must_ watch both. (The watcher could ignore
the local data in ZkStateReader and do a "force refresh" of data from ZK, but that's more
intensive on ZK than just waiting on a second watcher, and still doesn't guarantee it won't
'miss' updates to Y that happen on the quorum _after_ the watcher for X fires.) ...
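
To make that concrete, a sketch of the broken pattern (the {{ZkStateReader}} methods are real; the watcher body and {{isWhatImWaitingFor}} are illustrative stand-ins):

{code:java}
// BROKEN: watch only state.json, then consult the local reader for liveNodes.
zkStateReader.registerCollectionStateWatcher("collectionA", (liveNodes, coll) -> {
  // This watcher fires because state.json changed; the reader's cached
  // liveNodes may not have been updated yet by the (independent) live
  // nodes watcher, so re-reading the cache here doesn't close the race.
  Set<String> cachedLiveNodes = zkStateReader.getClusterState().getLiveNodes();
  return isWhatImWaitingFor(coll, cachedLiveNodes); // may decide based on stale liveNodes
});
{code}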
{quote}... or it can watch both and keep a state machine if it prefers.
{quote}
But since ZkStateReader already has (public) watch/predicate APIs that _imply_ they "watch"
both liveNodes & a DocCollection (and have callback methods that pass both as args), I'm
starting to come around to thinking that the best solution is:
 # "Fix" and Keep (and clearly document) the existing CollectionStateWatcher/Predicate based
APIs to "notify" on *both* DocCollection _and_ liveNode changes
 # For clients that don't care about liveNodes, add newer & simpler Watcher/Predicate
APIs that *only* notify on changes to the DocCollection.
 # Update javadocs to encourage people to use the most restrictive Watchers/Predicates for
their usecase
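
For reference, the simpler interfaces from item 2 might look something like this (signatures inferred from the description above, not copied from the attached patch):

{code:java}
// Single-input analogs of CollectionStateWatcher / CollectionStatePredicate:
// no liveNodes argument, so callers are only notified on state.json changes.
public interface DocCollectionWatcher {
  /** @return true to remove the watcher after this notification */
  boolean onStateChanged(DocCollection collectionState);
}

public interface DocCollectionPredicate {
  boolean matches(DocCollection collectionState);
}
{code}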

----
These changes aren't actually as hard/complex as I initially imagined when I first
speculated about them – I'm attaching a new patch that "fixes" ZkStateReader by:
 * add new DocCollectionWatcher & DocCollectionPredicate interfaces
 * add new registerDocCollectionWatcher(...), removeDocCollectionWatcher(...) & waitForState(...,
DocCollectionPredicate) impls in ZkStateReader
 * refactor the existing CollectionStateWatcher & CollectionStatePredicate methods in
ZkStateReader to be syntactic sugar around using a DocCollectionWatcher + LiveNodesListener.
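
A self-contained sketch of that wrapping idea (plain Java, illustrative only, not the actual ZkStateReader internals):

{code:java}
import java.util.Set;
import java.util.concurrent.atomic.AtomicReference;
import org.apache.solr.common.cloud.DocCollection;

/**
 * Illustrative only: drives one two-argument watcher from two single-input
 * watch paths, so a change to *either* liveNodes or the DocCollection
 * re-invokes the delegate with the latest known pair.
 */
class CombinedWatcher {
  interface PairWatcher { void onChanged(Set<String> liveNodes, DocCollection coll); }

  private final AtomicReference<Set<String>> liveNodes = new AtomicReference<>();
  private final AtomicReference<DocCollection> coll = new AtomicReference<>();
  private final PairWatcher delegate;

  CombinedWatcher(PairWatcher delegate) { this.delegate = delegate; }

  /** hook this up to the liveNodes watch path (e.g. a LiveNodesListener) */
  void onLiveNodesChanged(Set<String> newLiveNodes) {
    liveNodes.set(newLiveNodes);
    // either value may still be null until both paths have fired once
    delegate.onChanged(newLiveNodes, coll.get());
  }

  /** hook this up to the state.json watch path (e.g. a DocCollectionWatcher) */
  void onCollectionChanged(DocCollection newColl) {
    coll.set(newColl);
    delegate.onChanged(liveNodes.get(), newColl);
  }
}
{code}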

On the testing side:
 * patch includes my earlier TestWaitForStateWithJettyShutdowns
 * all existing tests in TestCollectionStateWatchers still pass (AFAICT ... haven't beasted
aggressively yet)
 ** added additional testing showing that liveNode changes are enough to ensure that CollectionStateWatchers
are notified.

Still TODO...
 * add DocCollectionWatcher & DocCollectionPredicate impls to CloudSolrClient and update
javadocs of existing methods similar to changes already made in ZkStateReader
 * clone TestCollectionStateWatchers into TestDocCollectionWatcher and modify to test the
new simplified API directly
 * audit all existing uses of CollectionStateWatchers and waitForState throughout the code
base:
 ** see what existing impls/callers don't care about liveNodes and can be refactored to use
the new lighter weight methods
 ** add new static factory helpers (like "clusterShape(...)") when appropriate
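
For example, a liveNodes-free analog of {{clusterShape(...)}} could be as simple as this (the name and signature are hypothetical, using the DocCollectionPredicate sketched earlier):

{code:java}
// Hypothetical factory: a DocCollection-only predicate for callers that never
// consult liveNodes, registrable via the new lighter-weight APIs.
public static DocCollectionPredicate expectedSliceCount(final int numSlices) {
  return (coll) -> coll != null && coll.getSlices().size() == numSlices;
}
{code}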

----
I'm planning to keep moving forward on this ... any feedback/concerns on the API/impl/docs
would be appreciated.

> waitForState/registerCollectionStateWatcher can see stale liveNodes data due to (Zk)
Watcher race condition
> -----------------------------------------------------------------------------------------------------------
>
>                 Key: SOLR-13490
>                 URL: https://issues.apache.org/jira/browse/SOLR-13490
>             Project: Solr
>          Issue Type: Bug
>            Reporter: Hoss Man
>            Assignee: Hoss Man
>            Priority: Major
>         Attachments: SOLR-13490.patch, SOLR-13490.patch
>
>
> I was investigating some failures in {{TestCloudSearcherWarming.testRepFactor1LeaderStartup}}
which led me to the hunch that {{waitForState}} wasn't ensuring that the predicates registered
would always be called if/when a node was shutdown.
> Digging into it a bit more, I found that the root cause seems to be the way the {{CollectionStateWatcher}}
/ {{CollectionStatePredicate}} APIs pass in *both* the {{DocCollection}}, and the "current"
{{liveNodes}} - but are only _triggered_ by the {{StateWatcher}} on the {{state.json}} (which
is used to rebuild the {{DocCollection}}) - when the {{CollectionStateWatcher}} / {{CollectionStatePredicate}}
are called, they get the "fresh" {{DocCollection}} but they get the _cached_ {{ZkStateReader.liveNodes}}.
> Meanwhile, the {{LiveNodeWatcher}} only calls {{refreshLiveNodes()}}, which updates {{ZkStateReader.liveNodes}}
and triggers any {{LiveNodesListener}} - it does *NOT* invoke any {{CollectionStateWatcher}}
for collections that may have replicas hosted on any of the changed nodes.
> Since there is no guaranteed order that Watchers will be triggered, this means there is
a race condition where the following can happen...
>  * client1 has a ZkStateReader with cached {{liveNodes=[N1, N2, N3]}}
>  * client1 registers a {{CollectionStateWatcher}} "watcherZ" that cares if "replicaX"
of collectionA is on a "down" node
>  * client2 causes shutdown of node N1 which is hosting replicaX
>  * client1's zkStateReader gets a WatchedEvent for state.json of collectionA
>  ** DocCollection for collectionA is rebuilt
>  ** watcherZ is fired w/cached {{liveNodes=[N1, N2, N3]}} and the new DocCollection
>  *** watcherZ sees that replicaX is on N1, but thinks N1 is live
>  *** watcherZ says "everything ok, not the event I was waiting for" and doesn't take
any action
>  * client1's zkStateReader gets a WatchedEvent for LIVE_NODES_ZKNODE
>  ** zkStateReader.liveNodes is rebuilt
> ...at no point in this sequence (or after it) will watcherZ be notified with
the updated liveNodes (unless/until another {{state.json}} change is made for collectionA).
> ----
> While this is definitely problematic in _tests_ that deal with node lifecycle and use
things like {{SolrCloudTestCase.waitForState(..., SolrCloudTestCase.clusterShape(...))}} to
check for the expected shards/replicas, a cursory search of how/where {{ZkStateReader.waitForState(...)}}
and {{ZkStateReader.registerCollectionStateWatcher(...)}} are used in solr-core suggests that
this could also lead to bad behavior in situations like reacting to shard leader loss, waiting
for all leaders of SYSTEM_COLL to come online for upgrade, running PrepRecoveryOp, etc...
(anywhere that liveNodes is used by the watcher/predicate)


