lucene-dev mailing list archives

From "Uwe Schindler (Issue Comment Edited) (JIRA)" <j...@apache.org>
Subject [jira] [Issue Comment Edited] (LUCENE-1536) if a filter can support random access API, we should use it
Date Mon, 10 Oct 2011 05:18:29 GMT

    [ https://issues.apache.org/jira/browse/LUCENE-1536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13123900#comment-13123900 ]

Uwe Schindler edited comment on LUCENE-1536 at 10/10/11 5:16 AM:
-----------------------------------------------------------------

bq. Before they were not applied in the filter, but everywhere else in the query. Now they are applied once per query

Sorry, this is only correct for the iterator-based advancing. For the filter-down-low approach they are of course still applied. But we should still show benchmarks that this really hurts, because caching acceptDocs (not liveDocs!) is very hard to do. Of course, a chained FilteredQuery with lots of chained filters could be simpler (only one static BitSet cached for the whole filter chain).
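
To make the caching idea concrete, here is a minimal sketch of one cached BitSet per segment for a whole filter chain. Everything here is hypothetical (the class name, the caching policy), and it uses the older 3.x-style Filter#getDocIdSet(IndexReader) signature for simplicity, not whatever trunk ends up with:

{code:java}
// Hypothetical sketch: materialize the AND of a whole filter chain once per
// segment and cache it, so it acts as one static set of accepted docs.
import java.io.IOException;
import java.util.Map;
import java.util.WeakHashMap;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.DocIdSet;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.Filter;
import org.apache.lucene.util.OpenBitSet;

public class CachedFilterChain {
  private final Filter[] chain;
  // keyed by segment core, so a reopened reader keeps bits for unchanged segments
  private final Map<Object,OpenBitSet> cache = new WeakHashMap<Object,OpenBitSet>();

  public CachedFilterChain(Filter... chain) {
    this.chain = chain;
  }

  /** Returns the AND of all filters in the chain, computed once per segment. */
  public synchronized OpenBitSet getBits(IndexReader segmentReader) throws IOException {
    final Object key = segmentReader.getCoreCacheKey();
    OpenBitSet bits = cache.get(key);
    if (bits == null) {
      final int maxDoc = segmentReader.maxDoc();
      bits = new OpenBitSet(maxDoc);
      bits.set(0, maxDoc); // start with all docs accepted
      for (final Filter f : chain) {
        final OpenBitSet filterBits = new OpenBitSet(maxDoc);
        final DocIdSet set = f.getDocIdSet(segmentReader);
        final DocIdSetIterator it = (set == null) ? null : set.iterator();
        if (it != null) {
          int doc;
          while ((doc = it.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
            filterBits.set(doc);
          }
        }
        bits.and(filterBits); // AND each filter into the combined set
      }
      cache.put(key, bits);
    }
    return bits;
  }
}
{code}

Keying the cache on getCoreCacheKey() means only new segments pay the chain-materialization cost after a reopen.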

Robert and I had more ideas for optimizing the case where acceptDocs are always applied in every scorer: e.g., ConjunctionTermScorer could pass null down for all but one sub-scorer. Ideally the one that gets the liveDocs should be the one with the lowest docFreq. The others don't need liveDocs, as the low-docFreq scorer already applied them, and deleted docs can never appear in hits, because the other scorers would simply advance over them. We should open new issues for those optimizations.
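
A minimal sketch of that conjunction idea (a hypothetical stand-in working on plain DocIdSetIterators, not the real ConjunctionTermScorer): the lead iterator, which should be the one with the lowest docFreq, is the only one that checks liveDocs; the others never see them, because they are only ever advanced to candidates the lead already accepted:

{code:java}
import java.io.IOException;

import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.Bits;

public final class LeadOnlyConjunction {

  /** Returns the next docID after {@code after} on which all iterators match,
   *  or NO_MORE_DOCS. liveDocs is checked on the lead iterator only. */
  public static int nextMatch(final DocIdSetIterator lead,
                              final DocIdSetIterator[] others,
                              final Bits liveDocs,
                              final int after) throws IOException {
    int doc = lead.advance(after + 1);
    advanceHead:
    while (doc != DocIdSetIterator.NO_MORE_DOCS) {
      // only the lead applies liveDocs: deleted docs never become candidates,
      // so the other iterators simply advance over them
      if (liveDocs == null || liveDocs.get(doc)) {
        for (final DocIdSetIterator other : others) {
          final int d = (other.docID() < doc) ? other.advance(doc) : other.docID();
          if (d == DocIdSetIterator.NO_MORE_DOCS) {
            return DocIdSetIterator.NO_MORE_DOCS;
          }
          if (d > doc) { // mismatch: catch the lead up and retry
            doc = lead.advance(d);
            continue advanceHead;
          }
        }
        return doc; // all sub-iterators matched a live doc
      }
      doc = lead.nextDoc();
    }
    return DocIdSetIterator.NO_MORE_DOCS;
  }
}
{code}

Picking the lowest-docFreq iterator as the lead minimizes both the number of liveDocs checks and the number of candidates the other iterators have to advance over.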
                
> if a filter can support random access API, we should use it
> -----------------------------------------------------------
>
>                 Key: LUCENE-1536
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1536
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: core/search
>    Affects Versions: 2.4
>            Reporter: Michael McCandless
>            Assignee: Michael McCandless
>            Priority: Minor
>              Labels: gsoc2011, lucene-gsoc-11, mentor
>             Fix For: 4.0
>
>         Attachments: CachedFilterIndexReader.java, LUCENE-1536-rewrite.patch, LUCENE-1536-rewrite.patch,
> LUCENE-1536-rewrite.patch, LUCENE-1536-rewrite.patch, LUCENE-1536-rewrite.patch, LUCENE-1536-rewrite.patch,
> LUCENE-1536-rewrite.patch, LUCENE-1536-rewrite.patch, LUCENE-1536.patch, LUCENE-1536.patch,
> LUCENE-1536.patch, LUCENE-1536.patch, LUCENE-1536.patch, LUCENE-1536.patch, LUCENE-1536.patch,
> LUCENE-1536.patch, LUCENE-1536.patch, LUCENE-1536.patch, LUCENE-1536.patch, LUCENE-1536.patch,
> LUCENE-1536.patch, LUCENE-1536.patch, LUCENE-1536.patch, LUCENE-1536.patch, LUCENE-1536.patch,
> LUCENE-1536.patch, LUCENE-1536.patch, LUCENE-1536.patch, LUCENE-1536.patch, LUCENE-1536.patch,
> LUCENE-1536.patch, LUCENE-1536.patch, LUCENE-1536.patch, LUCENE-1536.patch, changes-yonik-uwe.patch,
> luceneutil.patch
>
>
> I ran some performance tests, comparing applying a filter via
> random-access API instead of current trunk's iterator API.
> This was inspired by LUCENE-1476, where we realized deletions should
> really be implemented just like a filter, but then in testing found
> that switching deletions to iterator was a very sizable performance
> hit.
> Some notes on the test:
>   * Index is first 2M docs of Wikipedia.  Test machine is Mac OS X
>     10.5.6, quad core Intel CPU, 6 GB RAM, java 1.6.0_07-b06-153.
>   * I test across multiple queries.  1-X means an OR query, eg 1-4
>     means 1 OR 2 OR 3 OR 4, whereas +1-4 is an AND query, ie 1 AND 2
>     AND 3 AND 4.  "u s" means "united states" (phrase search).
>   * I test with multiple filter densities (0, 1, 2, 5, 10, 25, 75, 90,
>     95, 98, 99, 99.99999 (filter is non-null but all bits are set),
>     100 (filter=null, control)).
>   * Method high means I use the random-access filter API in
>     IndexSearcher's main loop.  Method low means I use the random-access
>     filter API down in SegmentTermDocs (just like deleted docs
>     today).
>   * Baseline (QPS) is current trunk, where the filter is applied as an
>     iterator up "high" (i.e. in IndexSearcher's search loop); see the
>     sketch below contrasting this with the random-access method.
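
To make the high-vs.-baseline terminology concrete, here is a sketch (a hypothetical helper, not IndexSearcher's real search loop) contrasting random-access filter application with the baseline leapfrog of scorer and filter iterators:

{code:java}
// Hypothetical demo class; Scorer extends DocIdSetIterator, so
// nextDoc()/advance() are available on both sides.
import java.io.IOException;

import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.Scorer;
import org.apache.lucene.util.Bits;

public final class FilterAccessDemo {

  /** "Method high" with random access: the scorer drives, and the filter
   *  costs one O(1) bit check per scorer hit. */
  public static int countRandomAccess(Scorer scorer, Bits filterBits)
      throws IOException {
    int count = 0;
    int doc;
    while ((doc = scorer.nextDoc()) != DocIdSetIterator.NO_MORE_DOCS) {
      if (filterBits.get(doc)) {
        count++;
      }
    }
    return count;
  }

  /** Baseline: scorer and filter iterators leapfrog each other, paying
   *  advance() calls on both sides per candidate. */
  public static int countLeapfrog(Scorer scorer, DocIdSetIterator filterIter)
      throws IOException {
    int count = 0;
    int filterDoc = filterIter.nextDoc();
    int scorerDoc = -1;
    while (filterDoc != DocIdSetIterator.NO_MORE_DOCS) {
      if (scorerDoc < filterDoc) {
        scorerDoc = scorer.advance(filterDoc);
      }
      if (scorerDoc == DocIdSetIterator.NO_MORE_DOCS) {
        break;
      }
      if (scorerDoc == filterDoc) {
        count++;
        filterDoc = filterIter.nextDoc();
      } else { // scorerDoc > filterDoc: let the filter catch up
        filterDoc = filterIter.advance(scorerDoc);
      }
    }
    return count;
  }
}
{code}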
