couchdb-dev mailing list archives

From: Chris Stockton <chrisstockto...@gmail.com>
Subject: Re: Bug or my lack of understanding? "Reduce output must shrink more rapidly"
Date: Wed, 17 Aug 2011 00:53:29 GMT
Hello,

On Tue, Aug 16, 2011 at 5:37 PM, Randall Leeds <randall.leeds@gmail.com> wrote:
> On Tue, Aug 16, 2011 at 17:03, Chris Stockton <chrisstocktonaz@gmail.com> wrote:
>
> Since you are collecting and creating keys in the output object, creating
> this single property made the output of reduce larger. CouchDB tries to
> detect reduce functions that don't actually reduce the data. If you know
> for sure that you are working with a bounded set of properties whose
> occurrences you would like to sum, you may set reduce_limit=false in your
> configuration. The default is true so that users don't shoot themselves in
> the foot (especially because you cannot cancel a runaway reduce if you
> don't have access to the machine to kill the process).
>
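
(Presumably "your configuration" here means the server's ini files rather
than the query string; a minimal sketch, assuming the setting lives in
local.ini under [query_server_config], which I have not verified:)

    [query_server_config]
    reduce_limit = false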

Thanks, Randall, for your reply. I changed my view call to [1], and oddly
it still gives the same error, so maybe I am doing something wrong? I
didn't see anything about reduce_limit anywhere on the CouchDB wiki.
In the long term that option scares me a little bit: if we ever ran
across new data that caused an infinite reduce due to a bug, all of our
CouchDB servers would be crippled. Do I have any other options here?
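
The only alternative I can think of, sketched under the assumption that
the view is counting occurrences of property names (doc.grid and the
field names below are hypothetical, not our real schema): emit one row
per property from the map function and let the built-in _sum reduce do
the counting.

    // Map: one row per property occurrence (doc.grid is hypothetical).
    function (doc) {
      if (doc.grid) {
        for (var prop in doc.grid) {
          emit(prop, 1);
        }
      }
    }

    // Reduce: the built-in sum, which always returns a single number.
    "_sum"

Queried with ?group=true this returns one total per property name, and
the reduce value can never grow with the number of distinct properties.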

It would be great if I could impose a size limit for reduce, or even a
"minimum" size limit, since it is odd to trigger a reduce error on the
very first record. Making the check wait until the reduce has run at
least, say, 100 times would be a better test of whether the data is
"shrinking" or at least staying constant in size. I'm not sure what to
suggest beyond that; I just think the current behavior doesn't feel
quite right, and maybe someone has a better suggestion.
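
For reference, my (unverified) understanding of the check in the 1.x
JavaScript query server is roughly the shape below; the constant and the
names are from memory, so treat this as a sketch rather than the actual
source. Because it compares sizes per reduction call, it can fire on the
very first batch:

    // Sketch of the overflow heuristic: complain when the serialized
    // reduce output is non-trivial and more than half the input size.
    var reduce_length = JSON.stringify(reductions).length;
    var input_length = JSON.stringify(input_rows).length;
    if (reduce_length > 200 && reduce_length * 2 > input_length) {
      throw("reduce_overflow_error: Reduce output must shrink more rapidly");
    }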

[1] http://<server>:59841/db_24/_design/test/_view/Grid?reduce_limit=false
