lucene-dev mailing list archives

From "Bill Bell (JIRA)" <>
Subject [jira] Commented: (SOLR-2218) Performance of start= and rows= parameters is exponentially slow with large data sets
Date Sat, 06 Nov 2010 01:58:42 GMT


Bill Bell commented on SOLR-2218:


I know how to do that. That is not the issue. Let me explain again.

This is a performance issue.

When you loop through results "deeply", each query gets SLOWER and SLOWER.

1. http://hostname/solr/select?fl=id&start=0&rows=1000&q=*:*
<int name="QTime">2</int>

2. http://hostname/solr/select?fl=id&start=10000&rows=1000&q=*:*
<int name="QTime">8</int>

3. http://hostname/solr/select?fl=id&start=20000&rows=1000&q=*:*
<int name="QTime">38</int>

It keeps getting slower!!

We need it to be consistently fast at QTIME=2.

Any solutions?
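The growing QTime values above are consistent with how Lucene collects paged results: to serve start=N&rows=M it must rank the top N+M documents, so the per-request work grows with the offset. A minimal Python sketch of that collection pattern (an illustration of the idea, not Solr's actual code):

```python
import heapq

def top_docs(scores, start, rows):
    """Collect the top (start + rows) docs by score, then discard the
    first `start` of them. This mirrors the top-k collection that
    start=/rows= paging implies: the heap must hold start + rows
    entries, so deep pages cost more per request."""
    k = start + rows
    heap = []  # min-heap of the best k (score, doc_id) seen so far
    for doc_id, score in enumerate(scores):
        if len(heap) < k:
            heapq.heappush(heap, (score, doc_id))
        elif score > heap[0][0]:
            heapq.heapreplace(heap, (score, doc_id))
    ranked = sorted(heap, reverse=True)
    return ranked[start:start + rows]
```

Because `k = start + rows`, a request for start=20000 maintains a 21,000-entry heap over every matching document, while start=0 only maintains 1,000 — which matches the QTime growth shown above.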

> Performance of start= and rows= parameters is exponentially slow with large data sets
> -------------------------------------------------------------------------------------
>                 Key: SOLR-2218
>                 URL:
>             Project: Solr
>          Issue Type: Improvement
>          Components: Build
>    Affects Versions: 1.4.1
>            Reporter: Bill Bell
> With large data sets, > 10M rows.
> Setting start=<large number> and rows=<large number> is slow, and gets slower
> the farther you get from start=0 with a complex query. Random sorting also makes this slower.
> Would like to somehow make this performance faster for looping through large data sets.
> It would be nice if we could pass a pointer to the result set to loop over, or support very large rows= values.
> Something like:
> rows=1000
> start=0
> spointer=string_my_query_1
> Then within an interval (say, 5 minutes) I can reference this loop:
> Something like:
> rows=1000
> start=1000
> spointer=string_my_query_1
> What do you think? Since the data set is so large, the cache is not helping.
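From the client side, the proposed loop might look like the sketch below. Note that `spointer` is only the parameter suggested in this issue; it does not exist in Solr, so this merely illustrates the intended calling pattern:

```python
import urllib.parse

def page_requests(base_url, query, pointer, rows=1000, pages=3):
    """Build the sequence of request URLs for the proposed spointer
    loop. The spointer value is a hypothetical handle that would let
    the server reuse the cached result set across pages instead of
    re-ranking from scratch at every offset."""
    urls = []
    for page in range(pages):
        params = {
            "q": query,
            "fl": "id",
            "rows": rows,
            "start": page * rows,
            "spointer": pointer,  # hypothetical: proposed in this issue
        }
        urls.append(base_url + "/select?" + urllib.parse.urlencode(params))
    return urls
```

Each page reuses the same `spointer` handle, so the server could (under this proposal) walk forward through an already-ranked result set rather than repeating the top-(start+rows) collection on every request.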

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.

