lucene-dev mailing list archives

From "Cao Manh Dat (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (SOLR-9941) log replay redundently (pre-)applies DBQs as if they were out of order
Date Tue, 10 Jan 2017 09:17:58 GMT

    [ https://issues.apache.org/jira/browse/SOLR-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15813726#comment-15813726
] 

Cao Manh Dat edited comment on SOLR-9941 at 1/10/17 9:17 AM:
-------------------------------------------------------------

Here is why Hoss's test fails:
- When the test adds updates, they are written to tlog-0000
- After the core is restarted, it replays the updates from tlog-0000
- At the same time, the test adds another DBQ; because the ulog is not active, DUP writes
this DBQ to another file (tlog-0001, buffering updates) and does not apply it to the IW
(see the sketch after this list).
{code:title=DistributedUpdateProcessor.java|borderStyle=solid}
// we're not in an active state, and this update isn't from a replay, so buffer it.
cmd.setFlags(cmd.getFlags() | UpdateCommand.BUFFERING);
ulog.add(cmd);
{code}
- After LogReplay finishes, we call a commit, which writes a commit update at the end of
tlog-0001
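
Roughly, the buffering decision looks like the sketch below. This is a simplified illustration, not the actual DistributedUpdateProcessor code; the flag/state checks mirror the snippet quoted above, while {{handleDeleteByQuery()}} and {{applyToIndexWriter()}} are made-up placeholders.

{code:java}
// Simplified sketch of the buffering decision described in the list above.
void handleDeleteByQuery(DeleteUpdateCommand cmd, UpdateLog ulog) throws IOException {
  boolean isReplay = (cmd.getFlags() & UpdateCommand.REPLAY) != 0;
  if (!isReplay && ulog.getState() != UpdateLog.State.ACTIVE) {
    // ulog not active and this is not a replay: only record the DBQ in the current
    // (buffering) tlog -- tlog-0001 here -- and never touch the IndexWriter
    cmd.setFlags(cmd.getFlags() | UpdateCommand.BUFFERING);
    ulog.deleteByQuery(cmd);
    return;
  }
  // normal/active path: the DBQ actually reaches the IndexWriter
  applyToIndexWriter(cmd);  // placeholder for the real delete path
}
{code}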

So this DBQ update will never be replayed (because it belongs to tlog-0001, not tlog-0000), and
since it was never applied to the IW, that DBQ update is lost.

But this does not mean we have a problem in SolrCloud mode or single-instance mode.
In SolrCloud mode, when a replica runs recoverFromLog, the replica is in state = DOWN during
that period, so it won't receive any updates.

In single-instance mode, we run the leader logic, so the DBQ is applied to the IW immediately
(along with updating {{ulog.deleteByQueries}}).
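
Conceptually, the leader path looks like the following sketch; the names are illustrative placeholders rather than the exact Solr call chain.

{code:java}
// Conceptual sketch of the single-instance (leader-logic) path described above.
void applyDeleteByQueryAsLeader(DeleteUpdateCommand cmd) throws IOException {
  // 1. the delete is executed against the IndexWriter right away
  applyToIndexWriter(cmd);  // placeholder for the real delete path
  // 2. the ulog also tracks it (conceptually the ulog.deleteByQueries list), so that
  //    getDBQNewer() can report it if a reordered add arrives later
  ulog.deleteByQuery(cmd);
}
{code}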

In TestRecovery, however, we mix the single-instance code path with the SolrCloud-mode code
path, so we end up with the failure.



> log replay redundently (pre-)applies DBQs as if they were out of order
> ----------------------------------------------------------------------
>
>                 Key: SOLR-9941
>                 URL: https://issues.apache.org/jira/browse/SOLR-9941
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public(Default Security Level. Issues are Public) 
>            Reporter: Hoss Man
>         Attachments: SOLR-9941.hoss-test-experiment.patch, SOLR-9941.patch, SOLR-9941.patch,
SOLR-9941.patch
>
>
> There's kind of an odd situation that arises when a Solr node starts up (after a crash)
and tries to recover from its tlog that causes deletes to be redundantly & excessively
applied -- at a minimum it causes really confusing log messages....
> * {{UpdateLog.init(...)}} creates {{TransactionLog}} instances for the most recent log
files found (based on numRecordsToKeep) and then builds a {{RecentUpdates}} instance from
them
> * Delete entries from the {{RecentUpdates}} are used to populate 2 lists:
> ** {{deleteByQueries}}
> ** {{oldDeletes}} (for deleteById).
> * Then when {{UpdateLog.recoverFromLog}} is called a {{LogReplayer}} is used to replay
any (uncommitted) {{TransactionLog}} entries
> ** during replay {{UpdateLog}} delegates to the UpdateRequestProcessorChain for the
various adds/deletes, etc...
> ** when an add makes it to {{RunUpdateProcessor}} it delegates to {{DirectUpdateHandler2}},
which (independent of the fact that we're in log replay) calls {{UpdateLog.getDBQNewer}} for
every add, looking for any "Reordered" deletes that have a version greater than the add
> *** if it finds _any_ DBQs "newer" than the document being added, it does a low level
{{IndexWriter.updateDocument}} and then immediately executes _all_ the newer DBQs ... _once
per add_
> ** these deletes are *also* still executed as part of the normal tlog replay, because
they are in the tlog.
> Which means if you are recovering from a tlog with 90 addDocs, followed by 5 DBQs, then
*each* of those 5 DBQs will be executed 91 times -- and for 90 of those executions, a
DUH2 INFO log message will say {{"Reordered DBQs detected. ..."}} even though the only reason
they are out of order is because Solr is deliberately applying them out of order.
> * At a minimum we should improve the log messages
> * Ideally we should stop (pre-emptively) applying these deletes during tlog replay.
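
To make the 91-executions figure above concrete, here is a quick back-of-the-envelope calculation using the 90-add / 5-DBQ example from the description (the numbers are the description's illustrative example, not measurements):

{code:java}
// Back-of-the-envelope numbers for the 90-add / 5-DBQ replay example above.
int adds = 90;
int dbqs = 5;
int preAppliesPerDbq = adds;       // every add that sees "newer" DBQs triggers each of them once
int replaysPerDbq = 1;             // each DBQ is also replayed once from the tlog itself
int executionsPerDbq = preAppliesPerDbq + replaysPerDbq;   // = 91
int totalDbqExecutions = dbqs * executionsPerDbq;          // = 455, instead of just 5
{code}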




