lucene-dev mailing list archives

From "Earwin Burrfoot (JIRA)" <>
Subject [jira] Commented: (LUCENE-1997) Explore performance of multi-PQ vs single-PQ sorting API
Date Thu, 29 Oct 2009 08:10:59 GMT


Earwin Burrfoot commented on LUCENE-1997:

bq. One thing that bothers me about multiPQ is the memory usage if you start paging deeper
and have many segments. I've seen up to 100 segments in production systems. 100x the memory
use isn't pretty. 
That's 100x the memory for the heaps alone, plus memory for the Comparables - not nice.
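As a rough illustration of the scaling being objected to (hypothetical numbers: 100 segments and paging 1000 hits deep - not figures from the benchmark itself), the heap-entry count works out like this:

```java
public class MultiPQMemory {
    public static void main(String[] args) {
        int segments = 100;   // hypothetical: segments seen in production
        int topN = 1000;      // hypothetical: deep-paging request depth

        int singlePQEntries = topN;            // one shared queue across all segments
        int multiPQEntries = segments * topN;  // one queue per segment

        // prints "100x" - each entry also carries its Comparable key
        System.out.println(multiPQEntries / singlePQEntries + "x");
    }
}
```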

bq. What kind of comparator can't pre-create a fixed ordinal list for all the possible values?
Any comparator with query-dependent ordering - for instance, a distance sort of any kind, be it geo distance or the closeness of any value to a sample supplied with the query.
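A toy sketch of why such a comparator defeats pre-computed ordinals (plain `java.util` code, not a Lucene API; the sample values are made up): the relative order of the same two values flips between queries, so no single ordinal list can be built at index time.

```java
import java.util.Arrays;
import java.util.Comparator;

public class DistanceSortDemo {
    // Query-dependent ordering: rank values by distance to a per-query sample.
    static Comparator<Integer> byDistanceTo(int sample) {
        return Comparator.comparingInt(v -> Math.abs(v - sample));
    }

    public static void main(String[] args) {
        Integer[] values = {10, 25, 40};

        Integer[] a = values.clone();
        Arrays.sort(a, byDistanceTo(12));       // query 1: sample = 12
        System.out.println(Arrays.toString(a)); // [10, 25, 40]

        Integer[] b = values.clone();
        Arrays.sort(b, byDistanceTo(38));       // query 2: sample = 38
        System.out.println(Arrays.toString(b)); // [40, 25, 10]
    }
}
```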

bq. I think the only time the ordinal list can't be created is when the source array contains
some value that can't be compared against another value - e.g. some variant on NULL - or when
the comparison function is broken, e.g. when a < b and b < c but c > a.
With such a comparison function you're busted anyway - the order of your hits depends on segment traversal order, for instance. If you sharded your search, it depends on the order in which your shards responded to the meta-search. Ugly.
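To see concretely how a non-transitive comparator makes results traversal-order dependent, consider picking the single "best" hit over the same three values in two scan orders (a toy sketch with a deliberately broken comparator, not Lucene code):

```java
import java.util.Comparator;
import java.util.List;

public class BrokenComparatorDemo {
    // Deliberately broken: a < b, b < c, but c < a (violates transitivity).
    static final Comparator<String> CYCLIC = (x, y) -> {
        if (x.equals(y)) return 0;
        if ((x + y).matches("ab|bc|ca")) return -1; // x < y
        return 1;                                   // x > y
    };

    // Linear scan for the smallest element, the way a collector visits docs.
    static String best(List<String> docs) {
        String min = docs.get(0);
        for (String d : docs)
            if (CYCLIC.compare(d, min) < 0) min = d;
        return min;
    }

    public static void main(String[] args) {
        // Same three "hits", two traversal orders, two different winners.
        System.out.println(best(List.of("a", "b", "c"))); // c
        System.out.println(best(List.of("b", "c", "a"))); // a
    }
}
```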

> Explore performance of multi-PQ vs single-PQ sorting API
> --------------------------------------------------------
>                 Key: LUCENE-1997
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Search
>    Affects Versions: 2.9
>            Reporter: Michael McCandless
>            Assignee: Michael McCandless
>         Attachments: LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch,
LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch
> Spinoff from recent "lucene 2.9 sorting algorithm" thread on java-dev,
> where a simpler (non-segment-based) comparator API is proposed that
> gathers results into multiple PQs (one per segment) and then merges
> them in the end.
> I started from John's multi-PQ code and worked it into
> contrib/benchmark so that we could run perf tests.  Then I generified
> the Python script I use for running search benchmarks (in
> contrib/benchmark/
> The script first creates indexes with 1M docs (based on
> SortableSingleDocSource, and based on wikipedia, if available).  Then
> it runs various combinations:
>   * Index with 20 balanced segments vs index with the "normal" log
>     segment size
>   * Queries with different numbers of hits (only for wikipedia index)
>   * Different top N
>   * Different sorts (by title, for wikipedia, and by random string,
>     random int, and country for the random index)
> For each test, 7 search rounds are run and the best QPS is kept.  The
> script runs singlePQ then multiPQ, and records the resulting best QPS
> for each and produces a table (in Jira format) as output.
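The multi-PQ scheme described in the issue above - collect the top N of each segment into its own queue, then merge the queues at the end - can be sketched as follows (plain `java.util.PriorityQueue` standing in for Lucene's HitQueue, and a made-up `Hit` record with float scores; a sketch of the idea, not the patch's implementation):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class MultiPQSketch {
    record Hit(int doc, float score) {}

    // Collect the top N hits of one segment into that segment's own queue.
    static PriorityQueue<Hit> collectSegment(List<Hit> segmentHits, int topN) {
        // Min-heap on score: the head is the worst hit currently kept.
        PriorityQueue<Hit> pq = new PriorityQueue<>(Comparator.comparingDouble(Hit::score));
        for (Hit h : segmentHits) {
            pq.offer(h);
            if (pq.size() > topN) pq.poll(); // evict the worst
        }
        return pq;
    }

    // Merge the per-segment queues into the final top N.
    static List<Hit> merge(List<PriorityQueue<Hit>> perSegment, int topN) {
        PriorityQueue<Hit> merged = new PriorityQueue<>(Comparator.comparingDouble(Hit::score));
        for (PriorityQueue<Hit> pq : perSegment)
            for (Hit h : pq) {
                merged.offer(h);
                if (merged.size() > topN) merged.poll();
            }
        // Drain the min-heap, then reverse to get best-first order.
        List<Hit> out = new ArrayList<>();
        while (!merged.isEmpty()) out.add(merged.poll());
        Collections.reverse(out);
        return out;
    }

    public static void main(String[] args) {
        List<Hit> seg1 = List.of(new Hit(0, 1.2f), new Hit(1, 0.4f));
        List<Hit> seg2 = List.of(new Hit(2, 2.0f), new Hit(3, 0.9f));
        List<PriorityQueue<Hit>> pqs =
            List.of(collectSegment(seg1, 2), collectSegment(seg2, 2));
        for (Hit h : merge(pqs, 2))
            System.out.println(h.doc() + " " + h.score()); // doc 2, then doc 0
    }
}
```

The memory objection in the comment above falls out directly: every per-segment queue holds up to N entries until the merge runs.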

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.

