lucene-dev mailing list archives

From "Shai Erera (JIRA)" <j...@apache.org>
Subject [jira] Commented: (LUCENE-1593) Optimizations to TopScoreDocCollector and TopFieldCollector
Date Sun, 03 May 2009 20:04:30 GMT

    [ https://issues.apache.org/jira/browse/LUCENE-1593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12705441#action_12705441 ]

Shai Erera commented on LUCENE-1593:
------------------------------------

Actually, if you request "sort-by-score" and ask for a scoring Collector, score() is hit twice
per document: once in TFC.collect(), which does not use a caching scorer, and a second time
in RelevanceComparator.copy()/compareTo(), which does. If we want to handle this, even though
it is somewhat of an edge case, I suggest that TFC.create() check whether any of the SortFields
is of type SortField.FIELD_SCORE and, if so, wrap the scorer passed to setScorer() with
ScoreCachingWrapperScorer, and remove that wrapping from RelevanceComparator. That way,
both the Collector and the Comparator will use the same caching scorer.
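The double-scoring problem and the caching fix can be illustrated with a minimal, self-contained sketch. The Scorer interface and CountingScorer below are hypothetical stand-ins (real Lucene Scorers declare IOException and have more methods); CachingScorer mimics the idea behind ScoreCachingWrapperScorer, not its actual code:

```java
public class ScoreCachingDemo {
    // Hypothetical minimal Scorer; Lucene's real abstract Scorer has more
    // methods and declares IOException, omitted to keep the sketch runnable.
    interface Scorer {
        int docID();
        float score();
    }

    // Underlying scorer that counts how often score() is actually computed.
    static class CountingScorer implements Scorer {
        int doc = -1;
        int scoreCalls = 0;
        public int docID() { return doc; }
        public float score() { scoreCalls++; return doc * 0.5f; }
    }

    // Mimics ScoreCachingWrapperScorer: recompute only when the doc changes,
    // so a second score() call for the same doc hits the cache.
    static class CachingScorer implements Scorer {
        final Scorer in;
        int cachedDoc = -1;
        float cachedScore;
        CachingScorer(Scorer in) { this.in = in; }
        public int docID() { return in.docID(); }
        public float score() {
            int doc = in.docID();
            if (doc != cachedDoc) {
                cachedScore = in.score();
                cachedDoc = doc;
            }
            return cachedScore;
        }
    }

    public static void main(String[] args) {
        CountingScorer underlying = new CountingScorer();
        Scorer scorer = new CachingScorer(underlying);
        underlying.doc = 7;
        float s1 = scorer.score(); // first call, e.g. from collect()
        float s2 = scorer.score(); // second call, e.g. from the comparator
        System.out.println(s1 == s2);              // true
        System.out.println(underlying.scoreCalls); // 1: computed only once
    }
}
```

If both the collector and the comparator see the same CachingScorer instance, the second score() call per doc is a cache hit, which is exactly the point of doing the wrapping once in TFC.create() rather than per-comparator.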

Also, we could always create a ScoringNoMaxScore collector in such cases: since we're going
to compute the score anyway, why not save it? I'm not sure about this, since it would violate
the API, i.e. you asked for a non-scoring collector and would get a scoring one just because
one of your sort fields was of type "sort-by-score". But then again, it really is an edge
case, and I'm not sure why someone would want to do it.

> Optimizations to TopScoreDocCollector and TopFieldCollector
> -----------------------------------------------------------
>
>                 Key: LUCENE-1593
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1593
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Search
>            Reporter: Shai Erera
>             Fix For: 2.9
>
>         Attachments: LUCENE-1593.patch, LUCENE-1593.patch, PerfTest.java
>
>
> This is a spin-off of LUCENE-1575 and proposes to optimize TSDC and TFC code to remove
> unnecessary checks. The plan is:
> # Ensure that IndexSearcher returns segments in increasing doc Id order, instead of
> numDocs().
> # Change TSDC and TFC's code to not use the doc id as a tie breaker. New docs will always
> have larger ids and therefore cannot compete.
> # Pre-populate HitQueue with sentinel values in TSDC (score = Float.NEG_INF) and remove
> the check if reusableSD == null.
> # Also move to use "changing top" and then call adjustTop(), in case we update the queue.
> # Some methods in Sort explicitly add SortField.FIELD_DOC as a "tie breaker" for the
> last SortField. But doing so should not be necessary (since we already break ties by docID),
> and is in fact less efficient (once the above optimization is in).
> # Investigate PQ - can we deprecate insert() and have only insertWithOverflow()? Add
> an addDummyObjects method which will populate the queue without "arranging" it, just storing
> the objects in the array (this can be used to pre-populate sentinel values)?
> I will post a patch as well as some perf measurements as soon as I have them.
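The sentinel pre-population idea from step 3 of the plan above can be sketched as follows. This is a minimal illustration, not Lucene code: java.util.PriorityQueue stands in for HitQueue, a float[]{score, doc} pair stands in for ScoreDoc, and the scores are made up. Pre-filling with Float.NEGATIVE_INFINITY sentinels means the hot loop never checks for a null or empty slot; it only compares each hit against the current head:

```java
import java.util.PriorityQueue;

public class SentinelQueueDemo {
    // Return the doc ids of the top `size` hits, ordered worst-to-best.
    // float[]{score, doc} is a stand-in for Lucene's ScoreDoc.
    static int[] topDocs(float[] scores, int size) {
        // Min-heap by score: the least competitive hit sits at the head,
        // playing the role of HitQueue.
        PriorityQueue<float[]> pq =
            new PriorityQueue<>(size, (a, b) -> Float.compare(a[0], b[0]));
        // Pre-populate with sentinels. No real hit scores lower than
        // -Infinity, so the queue is always "full" and the collect loop
        // needs no null or size checks -- just compare against the head.
        for (int i = 0; i < size; i++) {
            pq.add(new float[] { Float.NEGATIVE_INFINITY, -1 });
        }
        for (int doc = 0; doc < scores.length; doc++) {
            // Strict '>' also encodes the tie-break rule: a later doc with
            // an equal score cannot displace an earlier one.
            if (scores[doc] > pq.peek()[0]) {
                pq.poll();                    // evict worst (sentinel or hit)
                pq.add(new float[] { scores[doc], doc });
            }
        }
        int[] result = new int[size];
        for (int i = 0; i < size; i++) {
            result[i] = (int) pq.poll()[1];
        }
        return result;
    }

    public static void main(String[] args) {
        // docs 1 (0.9), 3 (0.7) and 0 (0.3) survive; doc 2 (0.1) is evicted.
        int[] top = topDocs(new float[] { 0.3f, 0.9f, 0.1f, 0.7f }, 3);
        for (int doc : top) {
            System.out.println(doc);
        }
    }
}
```

In real TSDC the equivalent of the pq.poll()/pq.add() pair would be the "change top, then adjustTop()" pattern from step 4, which avoids the remove-and-reinsert cost.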

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


---------------------------------------------------------------------
To unsubscribe, e-mail: java-dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-dev-help@lucene.apache.org

