lucene-dev mailing list archives

From "Mark Miller (JIRA)" <>
Subject [jira] Commented: (LUCENE-1997) Explore performance of multi-PQ vs single-PQ sorting API
Date Fri, 23 Oct 2009 04:55:59 GMT


Mark Miller commented on LUCENE-1997:

Same system, Java 1.6.0_15

||Seg size||Query||Tot hits||Sort||Top N||QPS old (single PQ)||QPS new (multi PQ)||Pct change||
|log|<all>|1000000|rand string|10|107.78|106.09|{color:red}-1.6%{color}|
|log|<all>|1000000|rand string|25|103.09|102.53|{color:red}-0.5%{color}|
|log|<all>|1000000|rand string|50|106.42|95.17|{color:red}-10.6%{color}|
|log|<all>|1000000|rand string|100|86.28|85.41|{color:red}-1.0%{color}|
|log|<all>|1000000|rand string|500|76.69|37.76|{color:red}-50.8%{color}|
|log|<all>|1000000|rand string|1000|68.48|22.95|{color:red}-66.5%{color}|
|log|<all>|1000000|rand int|10|120.59|112.03|{color:red}-7.1%{color}|
|log|<all>|1000000|rand int|25|119.80|107.49|{color:red}-10.3%{color}|
|log|<all>|1000000|rand int|50|119.96|98.84|{color:red}-17.6%{color}|
|log|<all>|1000000|rand int|100|88.58|89.24|{color:green}0.7%{color}|
|log|<all>|1000000|rand int|500|83.50|40.13|{color:red}-51.9%{color}|
|log|<all>|1000000|rand int|1000|74.80|23.83|{color:red}-68.1%{color}|

> Explore performance of multi-PQ vs single-PQ sorting API
> --------------------------------------------------------
>                 Key: LUCENE-1997
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Search
>    Affects Versions: 2.9
>            Reporter: Michael McCandless
>            Assignee: Michael McCandless
>         Attachments: LUCENE-1997.patch, LUCENE-1997.patch
> Spinoff from recent "lucene 2.9 sorting algorithm" thread on java-dev,
> where a simpler (non-segment-based) comparator API is proposed that
> gathers results into multiple PQs (one per segment) and then merges
> them in the end.
> I started from John's multi-PQ code and worked it into
> contrib/benchmark so that we could run perf tests.  Then I generified
> the Python script I use for running search benchmarks (in
> contrib/benchmark/).
> The script first creates indexes with 1M docs (based on
> SortableSingleDocSource, and based on wikipedia, if available).  Then
> it runs various combinations:
>   * Index with 20 balanced segments vs index with the "normal" log
>     segment size
>   * Queries with different numbers of hits (only for wikipedia index)
>   * Different top N
>   * Different sorts (by title, for wikipedia, and by random string,
>     random int, and country for the random index)
> For each test, 7 search rounds are run and the best QPS is kept.  The
> script runs singlePQ then multiPQ, and records the resulting best QPS
> for each, and produces a table (in Jira format) as output.
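
The multi-PQ approach described above can be sketched roughly as follows. This is an illustrative sketch only, not Lucene's actual collector API: the `Hit` record, `collectSegment`, and `merge` are hypothetical names, and a plain `java.util.PriorityQueue` stands in for Lucene's internal queue. Each segment fills its own bounded min-heap of size N, and the per-segment queues are merged into a single global top-N at the end:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class MultiPQSketch {

    // Hypothetical hit: doc id plus the value being sorted on.
    record Hit(int doc, int value) {}

    // Per-segment collection: a bounded min-heap keeps only the top N hits,
    // so no cross-segment comparisons happen during collection.
    static PriorityQueue<Hit> collectSegment(int[] segmentValues, int topN) {
        PriorityQueue<Hit> pq = new PriorityQueue<>(Comparator.comparingInt(Hit::value));
        for (int doc = 0; doc < segmentValues.length; doc++) {
            pq.offer(new Hit(doc, segmentValues[doc]));
            if (pq.size() > topN) {
                pq.poll(); // evict the weakest hit; queue stays at size N
            }
        }
        return pq;
    }

    // Final step: merge all per-segment queues into the global top N.
    static List<Hit> merge(List<PriorityQueue<Hit>> perSegment, int topN) {
        PriorityQueue<Hit> merged = new PriorityQueue<>(Comparator.comparingInt(Hit::value));
        for (PriorityQueue<Hit> pq : perSegment) {
            for (Hit hit : pq) {
                merged.offer(hit);
                if (merged.size() > topN) {
                    merged.poll();
                }
            }
        }
        List<Hit> result = new ArrayList<>(merged);
        result.sort(Comparator.comparingInt(Hit::value).reversed()); // best first
        return result;
    }

    public static void main(String[] args) {
        List<PriorityQueue<Hit>> queues = new ArrayList<>();
        queues.add(collectSegment(new int[] {5, 9, 1}, 2));
        queues.add(collectSegment(new int[] {7, 3, 8}, 2));
        for (Hit h : merge(queues, 2)) {
            System.out.println(h.value()); // prints 9 then 8
        }
    }
}
```

The trade-off visible in the table makes sense under this sketch: for small N the per-segment queues are cheap and the final merge touches few entries, but as N grows each segment retains N candidates and the merge cost scales with (segments x N), which matches the steep QPS drop at top N = 500 and 1000.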

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.

