lucene-dev mailing list archives

From "Mark Miller (JIRA)" <>
Subject [jira] Commented: (LUCENE-1997) Explore performance of multi-PQ vs single-PQ sorting API
Date Fri, 23 Oct 2009 04:13:59 GMT


Mark Miller commented on LUCENE-1997:

Another run:

I made the changes to the int/string comparators to do the faster compare.
Java 1.5.0_20
Quad Core - 2.0 GHz
Ubuntu 9.10, Kernel 2.6.31

||Seg size||Query||Tot hits||Sort||Top N||QPS old||QPS new||Pct change||
|log|<all>|1000000|rand string|10|115.32|101.53|{color:red}-12.0%{color}|
|log|<all>|1000000|rand string|25|115.22|91.82|{color:red}-20.3%{color}|
|log|<all>|1000000|rand string|50|114.40|89.70|{color:red}-21.6%{color}|
|log|<all>|1000000|rand string|100|91.30|81.04|{color:red}-11.2%{color}|
|log|<all>|1000000|rand string|500|76.31|43.94|{color:red}-42.4%{color}|
|log|<all>|1000000|rand string|1000|67.33|28.29|{color:red}-58.0%{color}|
|log|<all>|1000000|rand int|10|118.47|109.30|{color:red}-7.7%{color}|
|log|<all>|1000000|rand int|25|118.72|99.37|{color:red}-16.3%{color}|
|log|<all>|1000000|rand int|50|118.25|95.14|{color:red}-19.5%{color}|
|log|<all>|1000000|rand int|100|97.57|83.39|{color:red}-14.5%{color}|
|log|<all>|1000000|rand int|500|86.55|46.21|{color:red}-46.6%{color}|
|log|<all>|1000000|rand int|1000|78.23|28.94|{color:red}-63.0%{color}|
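The Pct change column can be reproduced directly from the two QPS columns. A minimal sketch of that arithmetic (the class and method names are mine, not part of the benchmark script):

```java
// Sketch: derive the "Pct change" column from the old/new QPS figures.
public class PctChange {
    // Relative change of new QPS vs. old QPS, in percent.
    static double pctChange(double qpsOld, double qpsNew) {
        return (qpsNew - qpsOld) / qpsOld * 100.0;
    }

    public static void main(String[] args) {
        // First table row: 115.32 QPS (single-PQ) -> 101.53 QPS (multi-PQ)
        System.out.printf("%.1f%%%n", pctChange(115.32, 101.53)); // about -12.0%
    }
}
```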

> Explore performance of multi-PQ vs single-PQ sorting API
> --------------------------------------------------------
>                 Key: LUCENE-1997
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Search
>    Affects Versions: 2.9
>            Reporter: Michael McCandless
>            Assignee: Michael McCandless
>         Attachments: LUCENE-1997.patch, LUCENE-1997.patch
> Spinoff from recent "lucene 2.9 sorting algorithm" thread on java-dev,
> where a simpler (non-segment-based) comparator API is proposed that
> gathers results into multiple PQs (one per segment) and then merges
> them in the end.
> I started from John's multi-PQ code and worked it into
> contrib/benchmark so that we could run perf tests.  Then I generified
> the Python script I use for running search benchmarks (in
> contrib/benchmark/).
> The script first creates indexes with 1M docs (based on
> SortableSingleDocSource, and based on wikipedia, if available).  Then
> it runs various combinations:
>   * Index with 20 balanced segments vs index with the "normal" log
>     segment size
>   * Queries with different numbers of hits (only for wikipedia index)
>   * Different top N
>   * Different sorts (by title, for wikipedia, and by random string,
>     random int, and country for the random index)
> For each test, 7 search rounds are run and the best QPS is kept.  The
> script runs singlePQ then multiPQ, and records the resulting best QPS
> for each and produces a table (in Jira format) as output.
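The collect-per-segment-then-merge idea described in the quoted issue can be sketched roughly as follows. This is an illustrative toy, not Lucene's actual comparator API: the class and method names are mine, and a real FieldComparator orders hits by the sort field rather than by a float score as done here.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// Sketch of the multi-PQ approach: one small priority queue per
// segment collects that segment's top-N hits, and the per-segment
// queues are merged into a single top-N at the end.
public class MultiPQSketch {
    static class Hit {
        final int doc;      // global doc id
        final float score;  // sort value (higher is better here)
        Hit(int doc, float score) { this.doc = doc; this.score = score; }
    }

    // Collect top-N per segment, then merge across segments.
    static List<Hit> topN(List<List<Hit>> segments, int n) {
        List<PriorityQueue<Hit>> queues = new ArrayList<>();
        for (List<Hit> seg : segments) {
            // Min-heap of size n: the weakest hit sits at the head
            // and is evicted when a stronger hit arrives.
            PriorityQueue<Hit> pq =
                new PriorityQueue<>((a, b) -> Float.compare(a.score, b.score));
            for (Hit h : seg) {
                if (pq.size() < n) pq.add(h);
                else if (h.score > pq.peek().score) { pq.poll(); pq.add(h); }
            }
            queues.add(pq);
        }
        // Merge step: fold every per-segment queue into one final size-n queue.
        PriorityQueue<Hit> merged =
            new PriorityQueue<>((a, b) -> Float.compare(a.score, b.score));
        for (PriorityQueue<Hit> pq : queues) {
            for (Hit h : pq) {
                if (merged.size() < n) merged.add(h);
                else if (h.score > merged.peek().score) { merged.poll(); merged.add(h); }
            }
        }
        List<Hit> out = new ArrayList<>(merged);
        out.sort((a, b) -> Float.compare(b.score, a.score)); // best first
        return out;
    }
}
```

The appeal, per the thread, is API simplicity: each segment's queue only ever sees values from that segment, so the comparator never has to compare across segments until the final merge.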

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
