incubator-couchdb-user mailing list archives

From Alon Keren <>
Subject Re: Reduce just N rows?
Date Sun, 15 Apr 2012 19:31:49 GMT
On 15 April 2012 21:06, Mark Hahn <> wrote:

>  >    would at least reach thousands, so fetching all keys is quite
> demanding
> My suggestion may well be the wrong path to take, but I'd like to point out
> that fetching thousands of keys is nothing. Getting 16 kbytes of data
> takes a few ms. And internally CouchDB has all the keys already sorted and
> ready to dump when you ask for it. It's not like this 16K is going across
> the wire to the client.
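The map side of that approach might look something like this (an untested sketch; `doc.score` and the in-memory `rows` collector are assumptions for illustration — in CouchDB the sorted index is built for you, and you'd query the view with `?limit=10` to get the ten lowest keys):

```javascript
// Stand-in for CouchDB's view index, just so the sketch is runnable
// on its own. CouchDB itself keeps the emitted rows sorted by key.
var rows = [];
function emit(key, value) { rows.push({ key: key, value: value }); }

// Map function: emit the value we want to rank by as the view key,
// so the index is pre-sorted and a ?limit=10 query returns the ten
// lowest rows. "doc.score" is a hypothetical field name.
function map(doc) {
  if (doc.score !== undefined) {
    emit(doc.score, null);
  }
}
```

Queried as, e.g., `GET /db/_design/app/_view/by_score?limit=10` (view name assumed).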

Latency indeed shouldn't be an issue, but I do wonder about the amount of
CPU my particular scenario would use.

> I use this kind of query all the time.  However, using a reduce would be
> much better.  You could keep a list of the ten lowest values found so far.
>  That is a finite amount of data and legal for a reduce.

...this is an interesting idea.
If I hard-code the limit into the reduce function, perhaps I could indeed
ignore the rest of the rows once I hit it. Since the rows ignored by the
reduce should never be updated, maybe they won't be incorporated into
future reduce calculations, and thus cost nothing?
Also, how would group-reduce treat this scheme?
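The bounded-list reduce Mark describes could be sketched like this (untested; assumes the emitted values are plain numbers and N is hard-coded):

```javascript
// Keep only the N lowest values seen so far, so the result stays a
// bounded amount of data (and thus legal for a CouchDB reduce).
var N = 10;

function reduce(keys, values, rereduce) {
  // On rereduce, "values" is a list of the arrays returned by earlier
  // reduce calls, so flatten them before re-ranking.
  var all = rereduce ? [].concat.apply([], values) : values;
  all.sort(function (a, b) { return a - b; });
  return all.slice(0, N);
}
```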
