couchdb-dev mailing list archives

From Randall Leeds <>
Subject Re: Bug or my lack of understanding? "Reduce output must shrink more rapidly"
Date Wed, 17 Aug 2011 01:29:45 GMT
On Tue, Aug 16, 2011 at 17:53, Chris Stockton <> wrote:

> Hello,
> On Tue, Aug 16, 2011 at 5:37 PM, Randall Leeds <>
> wrote:
> > On Tue, Aug 16, 2011 at 17:03, Chris Stockton <> wrote:
> >
> > Since you are collecting and creating keys in the output object creating
> > this single property made the output of reduce larger. CouchDB tries to
> > detect reduce functions that don't actually reduce the data. If you know
> for
> > sure that you are working with a bounded set of properties whose
> occurrences
> > you would like to sum you may set reduce_limit=false in your
> configuration.
> > The default is true so that users don't shoot themselves in the foot
> > (especially because you cannot cancel a run-away reduce if you don't have
> > access to the machine to kill the process).
> >
> Thanks for your reply, Randall. I changed my view call to [1] and oddly
> it still gives the same error, so maybe I am doing something wrong? I
> didn't see anything about reduce_limit on the CouchDB wiki.
> Long term that kind of scares me a little, though: if for some reason
> we ran across new data that caused an infinite reduce due to a bug,
> our CouchDBs would all get crippled. Do I have any other options here?
> It would be great if I could impose a size limit for reduce, or even a
> minimum size limit, as it is odd to trigger a reduce error on the
> first record; making it run at least 100 times would be a better test
> of whether the data is shrinking, or at least remaining constant. I'm
> not sure what to suggest beyond that, I just think it doesn't feel
> quite right; maybe someone has a better suggestion.
> [1] http://<server>:59841/db_24/_design/test/_view/Grid?reduce_limit=false

I'll explain how to change that setting below, but first you should
consider restructuring your map/reduce:

For example, instead of building an object of these counts in memory and
trying to carry it through reduce/rereduce, just emit multiple rows:

function (doc) {
  for (var col in doc) {
    emit(col, 1);
  }
}

This way you can use the built-in reduction by specifying just the string
"_sum" as your reduce function, which is much more efficient than doing it
yourself. It also keeps you clear of the reduce limit.
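To see why this stays under the limit, here's a small stand-alone sketch (plain Node.js, not CouchDB itself; the sample documents are made up) of what the map function above emits and how a "_sum"-style reduce collapses it. The reduce output is one number per key, so it never grows with the number of rows:

```javascript
// Made-up sample documents standing in for docs in your database.
var docs = [
  { a: 1, b: 2 },
  { a: 3, c: 4 }
];

// Collect emitted rows the way a map run would.
var rows = [];
function emit(key, value) {
  rows.push({ key: key, value: value });
}

// The map function from above: one row per property name.
docs.forEach(function (doc) {
  for (var col in doc) {
    emit(col, 1);
  }
});

// A "_sum"-style reduce grouped by key: add up the values per key.
// Rereduce just sums partial sums, so the output size is bounded by
// the number of distinct keys, not the number of rows.
var counts = {};
rows.forEach(function (row) {
  counts[row.key] = (counts[row.key] || 0) + row.value;
});

console.log(counts); // { a: 2, b: 1, c: 1 }
```

Querying the real view with group=true would give you the same per-key totals as rows.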

Anyway, in case you *do* work with your own installation and want to break
the reduce limit sometime, here's how:

If you look in default.ini you will see the section [query_server_config]
with reduce_limit = true.
You could put something like this in your local.ini:

[query_server_config]
reduce_limit = false

If you don't have access to the box you should be able to issue:
PUT http://<server>/_config/query_server_config/reduce_limit
The body of the request should be the quoted json string "false".

For example, with cURL, you might do:
curl -XPUT -H"Content-Type: application/json" -d'"false"' http://<server>/_config/query_server_config/reduce_limit
(Note that the data here is single and double quoted to ensure the double
quotes are passed as part of the body and not removed by the shell.)
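A quick way to convince yourself of the quoting (and, assuming your server allows it, to read the setting back afterwards with a plain GET to the same _config path):

```shell
# Single quotes preserve the inner double quotes, so the request body
# is exactly the JSON string "false" -- quotes included:
echo '"false"'

# To verify the change took effect (hypothetical <server> placeholder):
# curl http://<server>/_config/query_server_config/reduce_limit
```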

If you get an error, e.g., because you're using IrisCouch or some other
service which locks down the installation a bit, you'll have to contact
their support.
