lucene-dev mailing list archives

From "Shai Erera (JIRA)" <>
Subject [jira] Commented: (LUCENE-1593) Optimizations to TopScoreDocCollector and TopFieldCollector
Date Fri, 01 May 2009 18:47:30 GMT


Shai Erera commented on LUCENE-1593:

bq. And default Collector.acceptsDocsOutOfOrder should return false.

Do you propose it for back-compat reasons, or simply because it makes sense? Collector has not
been released yet, so we could define that method as abstract.
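Declaring the method abstract would force every subclass to state its own contract, roughly as in this sketch (the Collector here is reduced to the two methods under discussion; `InOrderCollector` is a hypothetical subclass for illustration):

```java
// Sketch of the API shape under discussion. The real Lucene 2.9 Collector
// also declares setScorer() and setNextReader(); they are omitted here.
abstract class Collector {
    public abstract void collect(int doc);
    // Abstract rather than defaulting to false: each implementation must
    // declare whether it can handle out-of-order docIDs.
    public abstract boolean acceptsDocsOutOfOrder();
}

// Hypothetical subclass: counts hits, requires in-order delivery.
class InOrderCollector extends Collector {
    int count;
    public void collect(int doc) { count++; }
    public boolean acceptsDocsOutOfOrder() { return false; }
}
```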

bq. Thoughts on collapsing down all of these classes to 1 or 2 for Lucy in a post to the Lucy
dev list entitled "SortCollector".

I read it, but I'm not sure I agree with everything you wrote there; I need to re-read it
more carefully before I can comment. One thing that caught my eye is your claim that "I found
one additional inefficiency in the Lucene implementation: score() is called twice for
"competitive" docs". Where exactly did you see that? I checked TFC's code again, and the score
is never computed twice: RelevanceComparator wraps the given Scorer with a ScoreCachingWrapperScorer,
so the second score() call returns almost immediately, without recomputing anything.
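The caching trick is simple: remember which docID's score was last computed and return the cached value on a repeated call. A minimal stand-alone sketch, assuming a Scorer interface reduced to the two methods the example needs (real Lucene Scorers have more):

```java
// Minimal stand-in for the Scorer abstraction (reduced for illustration).
interface SimpleScorer {
    int docID();
    float score();
}

// Sketch of the ScoreCachingWrapperScorer idea: the first score() call for
// a doc computes and caches; later calls for the same doc hit the cache.
class ScoreCachingScorer implements SimpleScorer {
    private final SimpleScorer in;
    private int cachedDoc = -1;   // no doc cached yet
    private float cachedScore;

    ScoreCachingScorer(SimpleScorer in) { this.in = in; }

    public int docID() { return in.docID(); }

    public float score() {
        int doc = in.docID();
        if (doc != cachedDoc) {   // compute at most once per doc
            cachedScore = in.score();
            cachedDoc = doc;
        }
        return cachedScore;
    }
}
```

This is why a comparator can call score() freely: only the first call per document pays for the computation.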

This was a tradeoff we made because of the TFC instances that don't compute document scores,
and so we removed the score parameter from FieldComparator.copy() and compareBottom(). We
could have added it back and passed Float.NEGATIVE_INFINITY in the non-scoring versions, but
that would not work well, since we really should compute the document's score if one of the
SortFields is RELEVANCE ... hmm - maybe we can change TFC.create() to check the sort fields
and, if one of them is RELEVANCE, return a ScoringNoMaxScore collector variant; then we should
be safe adding score back to those methods' signatures?

> Optimizations to TopScoreDocCollector and TopFieldCollector
> -----------------------------------------------------------
>                 Key: LUCENE-1593
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Search
>            Reporter: Shai Erera
>             Fix For: 2.9
>         Attachments: LUCENE-1593.patch, LUCENE-1593.patch,
> This is a spin-off of LUCENE-1575 and proposes to optimize TSDC and TFC code to remove
unnecessary checks. The plan is:
> # Ensure that IndexSearcher returns segments in increasing doc Id order, instead of
> # Change TSDC and TFC's code to not use the doc id as a tie breaker. New docs will always
have larger ids and therefore cannot compete.
> # Pre-populate HitQueue with sentinel values in TSDC (score = Float.NEGATIVE_INFINITY) and remove
the reusableSD == null check.
> # Also move to changing the top element and then calling adjustTop() whenever we update the queue.
> # Some methods in Sort explicitly add SortField.FIELD_DOC as a "tie breaker" for the
last SortField. But doing so should not be necessary (since we already break ties by docID),
and it is in fact less efficient (once the above optimization is in).
> # Investigate PQ - can we deprecate insert() and keep only insertWithOverflow()? Add
an addDummyObjects method that populates the queue without "arranging" it, just storing
the objects in the array (this could be used to pre-populate sentinel values)?
> I will post a patch as well as some perf measurements as soon as I have them.
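Item 3 above (sentinel pre-population) can be sketched stand-alone: pre-fill a fixed-size min-heap with -Infinity so the collect path never needs a null or size check. This is a minimal float heap for illustration, not the real HitQueue:

```java
// Sketch of the sentinel idea: the heap starts "full" of -Infinity, so a
// new score either beats the top (the worst retained score) or is dropped.
// No emptiness or null checks are needed on the hot path.
class SentinelHitQueue {
    private final float[] heap;

    SentinelHitQueue(int size) {
        heap = new float[size];
        java.util.Arrays.fill(heap, Float.NEGATIVE_INFINITY); // sentinels
    }

    float top() { return heap[0]; }  // worst retained score

    void insertWithOverflow(float score) {
        if (score > heap[0]) {       // competitive: replace worst, re-heapify
            heap[0] = score;
            siftDown();
        }
    }

    private void siftDown() {
        int i = 0;
        while (true) {
            int l = 2 * i + 1, r = l + 1, smallest = i;
            if (l < heap.length && heap[l] < heap[smallest]) smallest = l;
            if (r < heap.length && heap[r] < heap[smallest]) smallest = r;
            if (smallest == i) break;
            float t = heap[i]; heap[i] = heap[smallest]; heap[smallest] = t;
            i = smallest;
        }
    }
}
```

Because sentinels occupy every slot from the start, non-competitive docs cost exactly one comparison against top(), which is the point of the optimization.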

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
