incubator-couchdb-user mailing list archives

From: Chris Stockton <chrisstockto...@gmail.com>
Subject: Re: Bug or my lack of understanding? "Reduce output must shrink more rapidly"
Date: Thu, 18 Aug 2011 18:08:06 GMT
Hello,

On Wed, Aug 17, 2011 at 10:55 AM, Robert Newson <rnewson@apache.org> wrote:
> The reduce_limit heuristic is there to save you from writing bad
> reduce functions that are destined to fail in production as document
> count increases. The result of a reduce call should be strictly
> smaller than the input size (and preferably a lot smaller).
>
> If the number of keys in the returned object is fixed, you'll probably
> be fine, though testing with a sizeable number of documents (and
> graphing the performance curve) will prove it.
>
> B.
>
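[For illustration, a minimal sketch of the contrast Robert describes; these functions are illustrative, not taken from this thread. A reduce whose output scales with its input can never shrink, while one that collapses its input to a scalar shrinks as intended:]

    // Grows with input: the output is as large as the values list,
    // so it will eventually trip the reduce_limit check.
    function (keys, values, rereduce) {
      return values;
    }

    // Shrinks as intended: the output is a single number no matter
    // how many values come in (equivalent to the built-in _sum).
    function (keys, values, rereduce) {
      return sum(values);
    }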

Hello Robert,

That is exactly what I'm confused about: the response is a fixed-size
statistical aggregation. It does not grow in length; it only
increments scalar values, so the structure on the first pass is the
same size as on the last. I think the view erroring out on the first
iteration is undesired behavior, and the heuristic should sample more
data, or use a slightly more forgiving algorithm, before deciding. If
that simply won't happen, or no one else agrees, I will disable the
reduce limit... but that feels very dangerous, and I would almost
rather patch our CouchDB installations than do that.
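[For reference, a minimal sketch of the kind of fixed-size aggregation described above, assuming the map function emits plain numbers. The output object always has the same four keys, however many documents feed into it:]

    function (keys, values, rereduce) {
      var acc = { count: 0, sum: 0, min: Infinity, max: -Infinity };
      for (var i = 0; i < values.length; i++) {
        if (rereduce) {
          // values are accumulator objects from previous reduce passes
          acc.count += values[i].count;
          acc.sum   += values[i].sum;
          acc.min = Math.min(acc.min, values[i].min);
          acc.max = Math.max(acc.max, values[i].max);
        } else {
          // values are the raw numbers emitted by the map function
          acc.count += 1;
          acc.sum   += values[i];
          acc.min = Math.min(acc.min, values[i]);
          acc.max = Math.max(acc.max, values[i]);
        }
      }
      return acc;
    }

[The check itself can be switched off server-wide by setting reduce_limit = false in the [query_server_config] section of local.ini, which is presumably what "disable the reduce limit" refers to here.]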

-Chris
