lucene-dev mailing list archives

From "Mark Miller (JIRA)" <>
Subject [jira] Commented: (LUCENE-1997) Explore performance of multi-PQ vs single-PQ sorting API
Date Sun, 25 Oct 2009 19:05:59 GMT


Mark Miller commented on LUCENE-1997:

Time for the reevaluations?

With the previous numbers, I would have said I'd -1 it. Now the numbers have changed. It's
less clear.

However - I'm still leaning against. I don't like the 30-50% drops, even if top 500/1000 are
not as common as top 10/100. It's a nasty hit for those that do it. It doesn't carry tons of
weight, but I don't like it.

I also really don't like shifting back to this API right after rolling out the new one. It's
very ugly. It's not a good precedent to set for our users. And unless we make a change in our
back compat policy, we are stuck with both APIs until 4.0. Managing two APIs is something
else I don't like.

Finally, creating a custom sort is an advanced operation. The vast majority of Lucene users
will be fine with the built-in sorts. If you need a new custom one, you are into some serious
stuff already. You can handle the new API. We have seen users handle it. Uwe had ideas for
helping in that regard, and documentation can probably still be improved based on future user
feedback.
I'm not as dead set against it as I was, but I still don't think I'm for the change myself.

> Explore performance of multi-PQ vs single-PQ sorting API
> --------------------------------------------------------
>                 Key: LUCENE-1997
>                 URL:
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Search
>    Affects Versions: 2.9
>            Reporter: Michael McCandless
>            Assignee: Michael McCandless
>         Attachments: LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch, LUCENE-1997.patch
> Spinoff from recent "lucene 2.9 sorting algorithm" thread on java-dev,
> where a simpler (non-segment-based) comparator API is proposed that
> gathers results into multiple PQs (one per segment) and then merges
> them in the end.
> I started from John's multi-PQ code and worked it into
> contrib/benchmark so that we could run perf tests.  Then I generified
> the Python script I use for running search benchmarks (in
> contrib/benchmark/
> The script first creates indexes with 1M docs (based on
> SortableSingleDocSource, and based on wikipedia, if available).  Then
> it runs various combinations:
>   * Index with 20 balanced segments vs index with the "normal" log
>     segment size
>   * Queries with different numbers of hits (only for wikipedia index)
>   * Different top N
>   * Different sorts (by title, for wikipedia, and by random string,
>     random int, and country for the random index)
> For each test, 7 search rounds are run and the best QPS is kept.  The
> script runs singlePQ then multiPQ, and records the resulting best QPS
> for each and produces a table (in Jira format) as output.
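The multi-PQ collection described in the issue (one bounded priority queue per segment, merged at the end) can be sketched roughly as below. This is a simplified standalone illustration using `java.util.PriorityQueue`, not Lucene's actual `org.apache.lucene.util.PriorityQueue` or comparator API; the names `Hit`, `collectSegment`, and `merge` are hypothetical.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class MultiPQSketch {
    // A hit: document id plus its sort key (e.g. a score or an ordinal).
    record Hit(int doc, int key) {}

    // Collect the top-N hits of one segment into a bounded min-heap:
    // the head is always the worst hit currently retained.
    static PriorityQueue<Hit> collectSegment(List<Hit> segmentHits, int topN) {
        PriorityQueue<Hit> pq = new PriorityQueue<>(Comparator.comparingInt(Hit::key));
        for (Hit h : segmentHits) {
            pq.offer(h);
            if (pq.size() > topN) pq.poll(); // evict the current worst hit
        }
        return pq;
    }

    // Merge the per-segment queues into the global top-N, best key first.
    static List<Hit> merge(List<PriorityQueue<Hit>> queues, int topN) {
        PriorityQueue<Hit> merged = new PriorityQueue<>(Comparator.comparingInt(Hit::key));
        for (PriorityQueue<Hit> pq : queues) {
            for (Hit h : pq) {
                merged.offer(h);
                if (merged.size() > topN) merged.poll();
            }
        }
        List<Hit> result = new ArrayList<>(merged);
        result.sort(Comparator.comparingInt(Hit::key).reversed()); // best first
        return result;
    }

    public static void main(String[] args) {
        // Two "segments" of hits; keep the global top 2 by key.
        List<Hit> seg1 = List.of(new Hit(0, 5), new Hit(1, 9), new Hit(2, 1));
        List<Hit> seg2 = List.of(new Hit(3, 7), new Hit(4, 3), new Hit(5, 8));
        List<PriorityQueue<Hit>> perSegment = List.of(
            collectSegment(seg1, 2), collectSegment(seg2, 2));
        System.out.println(merge(perSegment, 2)); // the two hits with keys 9 and 8
    }
}
```

The appeal of this shape is that each per-segment comparator only ever sees one segment's values, which is what makes the simpler non-segment-based comparator API possible; the cost is the extra merge step and larger total queue size, which is where the reported slowdowns at large top-N come from.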

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
