lucene-dev mailing list archives

From "Hoss Man (JIRA)" <>
Subject [jira] Commented: (LUCENE-1316) Avoidable synchronization bottleneck in MatchAlldocsQuery$MatchAllScorer
Date Sat, 28 Jun 2008 00:57:45 GMT


Hoss Man commented on LUCENE-1316:

bq. TermDocs instance returned cannot be used to seek to a different term. However, this is
minor and not a back compatibility concern since "null" was not previously a supported value.

So essentially this approach only improves MatchAllDocsQuery, correct? .... Other use cases
where lots of termDoc enumeration takes place (RangeFilter and PrefixFilter type code) wouldn't
benefit from this at all.

Assuming that genuinely eliminating the synchronization is infeasible, the other approach
that occurred to me, along the lines of a "read only" IndexReader, is this: if we started
providing a public method for getting the list of all deleted docs (either as a BitVector or
as a DocIdSet or something), then it would be easy to implement a SnapshotFilteredIndexReader
that, on construction, caches the current list of all deleted docs in the IndexReader it's
wrapping, uses that cache for all isDeleted() calls, and proxies all other methods to the
underlying reader.

Then things like MatchAllDocs, RangeFilter, and PrefixFilter could (optionally?) construct
one of those prior to their big iteration loops and use it in place of the original
IndexReader. That trades the synchronization bottleneck for deletion data as of the moment
the query was started (a fair trade for most people).
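The snapshot idea can be sketched roughly as below. SimpleReader and SnapshotReader are
hypothetical stand-ins for IndexReader and the proposed SnapshotFilteredIndexReader (all
names here are illustrative, not real Lucene API); the point is that one synchronized copy
at construction time replaces a synchronized call per document:

```java
import java.util.BitSet;

// Stand-in for IndexReader: deletion state guarded by a lock, as in SegmentReader.
class SimpleReader {
    private final BitSet deleted = new BitSet();
    private final int maxDoc;

    SimpleReader(int maxDoc) { this.maxDoc = maxDoc; }

    synchronized void delete(int doc) { deleted.set(doc); }

    // The synchronized hot spot: called once per document in big iteration loops.
    synchronized boolean isDeleted(int doc) { return deleted.get(doc); }

    // The proposed public accessor: hand out a copy of the current deletion state.
    synchronized BitSet deletedDocsSnapshot() { return (BitSet) deleted.clone(); }

    int maxDoc() { return maxDoc; }
}

// Wraps a reader, caching deletions as of construction; isDeleted() then needs
// no lock, at the cost of not seeing deletions made after the snapshot was taken.
class SnapshotReader {
    private final BitSet deletedSnapshot;

    SnapshotReader(SimpleReader reader) {
        this.deletedSnapshot = reader.deletedDocsSnapshot();  // one synchronized call
    }

    boolean isDeleted(int doc) { return deletedSnapshot.get(doc); }  // lock-free
}
```

A scorer or filter would build the SnapshotReader once, before its loop, and query only
the snapshot inside the loop.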

> Avoidable synchronization bottleneck in MatchAlldocsQuery$MatchAllScorer
> ------------------------------------------------------------------------
>                 Key: LUCENE-1316
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Bug
>          Components: Query/Scoring
>    Affects Versions: 2.3
>         Environment: All
>            Reporter: Todd Feak
>            Priority: Minor
>         Attachments: LUCENE_1316.patch, LUCENE_1316.patch, LUCENE_1316.patch,
>   Original Estimate: 1h
>  Remaining Estimate: 1h
> The isDeleted() method on IndexReader has been mentioned a number of times as a potential
synchronization bottleneck. However, the reason this bottleneck occurs is actually at a higher
level that wasn't focused on (at least in the threads I read).
> In every case I saw where a stack trace was provided to show the lock/block, the method
appears higher in the stack. In Solr particularly, this scorer is used for "NOT" queries. We
saw incredibly poor performance (an order of magnitude worse) in our load tests for NOT
queries, due to this bottleneck. The problem is that every single document is run through
this isDeleted() method, which is synchronized. Having an optimized index exacerbates this
issue, as there is only a single SegmentReader to synchronize on, causing a major thread
pileup waiting for the lock.
> By simply having the MatchAllScorer check whether there have been any deletions in the
reader, much of this can be avoided, especially in a read-only production environment where
you have slaves doing all the high-load searching.
> I modified line 67 in the MatchAllDocsQuery
>   if (!reader.isDeleted(id)) {
> TO:
>   if (!reader.hasDeletions() || !reader.isDeleted(id)) {
> In our micro load test for NOT queries only, this was a major performance improvement.
 We also got the same query results. I don't believe this will improve the situation for indexes
that have deletions. 
> Please consider making this adjustment for a future bug fix release.
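The one-line patch quoted above can be illustrated with a minimal stand-in. MiniReader and
countLive() are hypothetical names, not real Lucene classes; the sketch shows why the
short-circuit helps: when the reader has no deletions, the synchronized isDeleted() call is
skipped entirely, so the loop never contends for the lock:

```java
// Stand-in for IndexReader; not the real Lucene class.
class MiniReader {
    private volatile boolean hasDeletions = false;

    void markDeletions() { hasDeletions = true; }

    boolean hasDeletions() { return hasDeletions; }  // cheap, no lock taken

    // The bottleneck from the issue: one synchronized call per document.
    synchronized boolean isDeleted(int doc) { return false; }
}

class MatchAllLoop {
    // Counts live docs the way MatchAllScorer iterates, with the patched test:
    // only take the lock when deletions actually exist.
    static int countLive(MiniReader reader, int maxDoc) {
        int count = 0;
        for (int id = 0; id < maxDoc; id++) {
            if (!reader.hasDeletions() || !reader.isDeleted(id)) {
                count++;
            }
        }
        return count;
    }
}
```

With no deletions, the left side of the `||` is true for every document and the synchronized
call is never reached, which matches the reporter's observation that indexes with deletions
see no benefit.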

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
