lucene-dev mailing list archives

From "Mark Miller (JIRA)" <>
Subject [jira] Commented: (LUCENE-1997) Explore performance of multi-PQ vs single-PQ sorting API
Date Sun, 25 Oct 2009 20:43:59 GMT


Mark Miller commented on LUCENE-1997:

Given good enough reasons, I could see saying we made a mistake and switching back - but as it is, for the reasons I've given, I don't find that to be the case. I don't feel the new API was a mistake yet.

Lots of other guys to weigh in though. If everyone else feels like it's the right move, I'm
not going to -1 it - just weighing in with how I feel.

I'm not seeing 10-20% faster across the board - on my system it doesn't even hit 10%, and I'm
a Linux user and advocate. I'm all for performance, but < 10% here and there is not enough
to sway me against 30-50% losses in the large-queue cases, combined with having to shift the API back.
It's not a clear win either way, but I've said which way I lean.

Luckily, it's not just me you have to convince. Lots of smart people still to weigh in.

> Explore performance of multi-PQ vs single-PQ sorting API
> --------------------------------------------------------
>                 Key: LUCENE-1997
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Search
>    Affects Versions: 2.9
>            Reporter: Michael McCandless
>            Assignee: Michael McCandless
>         Attachments: LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch
> Spinoff from recent "lucene 2.9 sorting algorithm" thread on java-dev,
> where a simpler (non-segment-based) comparator API is proposed that
> gathers results into multiple PQs (one per segment) and then merges
> them in the end.
> I started from John's multi-PQ code and worked it into
> contrib/benchmark so that we could run perf tests.  Then I generified
> the Python script I use for running search benchmarks (in
> contrib/benchmark/
> The script first creates indexes with 1M docs (based on
> SortableSingleDocSource, and based on wikipedia, if available).  Then
> it runs various combinations:
>   * Index with 20 balanced segments vs index with the "normal" log
>     segment size
>   * Queries with different numbers of hits (only for wikipedia index)
>   * Different top N
>   * Different sorts (by title, for wikipedia, and by random string,
>     random int, and country for the random index)
> For each test, 7 search rounds are run and the best QPS is kept.  The
> script runs singlePQ then multiPQ, and records the resulting best QPS
> for each and produces a table (in Jira format) as output.
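The multi-PQ idea the issue describes - collect a small top-N queue per segment, then merge the per-segment queues at the end - can be sketched as follows. This is a minimal illustration using `java.util.PriorityQueue` rather than Lucene's internal `org.apache.lucene.util.PriorityQueue` and comparator classes; the names (`ScoreDoc`, `collectSegment`, `topNAcrossSegments`) are ours, not the patch's API, and doc ids are kept segment-local for simplicity.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Sketch of multi-PQ collection: one small priority queue per segment,
// merged into a single top-N at the end.
public class MultiPQSketch {

    // Minimal stand-in for a (docId, score) hit.
    static final class ScoreDoc {
        final int doc;
        final float score;
        ScoreDoc(int doc, float score) { this.doc = doc; this.score = score; }
    }

    // Per-segment collection: keep only the N best hits in a min-heap,
    // so the weakest retained hit sits at the top and is evicted first.
    static PriorityQueue<ScoreDoc> collectSegment(float[] scores, int n) {
        PriorityQueue<ScoreDoc> pq =
            new PriorityQueue<>(n, Comparator.comparingDouble(sd -> sd.score));
        for (int doc = 0; doc < scores.length; doc++) {
            ScoreDoc hit = new ScoreDoc(doc, scores[doc]);
            if (pq.size() < n) {
                pq.add(hit);
            } else if (pq.peek().score < hit.score) {
                pq.poll();   // evict the current weakest hit
                pq.add(hit);
            }
        }
        return pq;
    }

    // Merge step: fold every per-segment queue into one final queue of
    // size N. Each segment queue holds at most N entries, so the merge
    // cost is small relative to the per-document collection phase.
    static List<ScoreDoc> topNAcrossSegments(List<float[]> segments, int n) {
        PriorityQueue<ScoreDoc> merged =
            new PriorityQueue<>(n, Comparator.comparingDouble(sd -> sd.score));
        for (float[] seg : segments) {
            for (ScoreDoc hit : collectSegment(seg, n)) {
                if (merged.size() < n) {
                    merged.add(hit);
                } else if (merged.peek().score < hit.score) {
                    merged.poll();
                    merged.add(hit);
                }
            }
        }
        // Drain the min-heap, prepending each hit to get best-first order.
        List<ScoreDoc> out = new ArrayList<>();
        while (!merged.isEmpty()) out.add(0, merged.poll());
        return out;
    }
}
```

The tradeoff the thread debates falls out of this shape: the per-segment comparator stays simple (no cross-segment value conversion), but with a large top N each segment carries its own N-sized queue, which is where the reported large-queue slowdowns come from.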

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.

