incubator-couchdb-dev mailing list archives

From: Chris Anderson <jch...@apache.org>
Subject: Re: reduce_limit error
Date: Tue, 05 May 2009 22:21:55 GMT
On Tue, May 5, 2009 at 2:17 PM, Brian Candler <B.Candler@pobox.com> wrote:
> On Tue, May 05, 2009 at 01:19:10PM -0700, Chris Anderson wrote:
>> It looks like this reduce would eventually
>> overwhelm the interpreter, as your set of hash keys looks like it may
>> grow without bounds as it encounters more data.
>
> As you can probably see, it's counting IP address prefixes, and it's
> bounded. Even encountering all possible IPv4 prefix lengths (/0 to /32)
> and IPv6 prefix lengths (/0 to /128), there will never be more than
> 33 + 129 = 162 keys in the hash.
>
>> Perhaps I'm wrong. 200 bytes is a bit small, but I'd be worried that
>> with a 4 KB limit, users wouldn't get a warning until they had moved a
>> "bad" reduce to production data.
>
> It's not so much a warning as a hard error :-)
>
>> If your reduce is ok even on giant data sets, maybe you can experiment
>> with the threshold in share/server/views.js line 52 to find the minimum
>> value that will allow you to proceed.
>
> In my case, I'm happy to turn off the checking entirely. I was just
> following the request in default.ini:
>
> ; If you think you're hitting reduce_limit with a "good" reduce function,
> ; please let us know on the mailing list so we can fine tune the heuristic.
>
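
(For anyone following along: Brian's reduce isn't shown in the thread, but
the kind of function under discussion is presumably along these lines; a
reconstruction only, assuming the map emits prefix strings such as "v4/24"
as values:

    function (keys, values, rereduce) {
      var counts = {};
      if (rereduce) {
        // merge the per-prefix count objects from earlier reduce passes
        for (var i = 0; i < values.length; i++) {
          for (var k in values[i]) {
            counts[k] = (counts[k] || 0) + values[i][k];
          }
        }
      } else {
        // tally one hash key per distinct prefix string
        for (var j = 0; j < values.length; j++) {
          counts[values[j]] = (counts[values[j]] || 0) + 1;
        }
      }
      return counts;
    }

The hash it returns is what the reduce_limit heuristic is sizing up, and it
stays small only because the set of possible prefixes is bounded.)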

Gotcha, your reduce seems ok given the bounded nature of the data set.
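
If you do want to turn the check off entirely, as you mention, a minimal
sketch of the local.ini override; the section and key below are the ones
the stock default.ini uses, so double-check against your copy:

    [query_server_config]
    reduce_limit = false

Since local.ini is read on top of default.ini, this leaves the shipped
defaults untouched.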
Still, I'm not clear why you don't just have a map with keys like:

["v4","24","201","121","68"]

and then get your counts using group_level = 2 and a standard count.
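
Something like this, roughly (a sketch only; the doc fields "family",
"masklen", and "octets" are my guesses at your document shape):

    // map: emit one row per prefix observed,
    // key = [family, masklen, octet, ...], e.g. ["v4","24","201","121","68"]
    function (doc) {
      emit([doc.family, String(doc.masklen)].concat(doc.octets), 1);
    }

    // reduce: a plain count (the built-in _count should work here too)
    function (keys, values, rereduce) {
      return sum(values);
    }

Querying the view with ?group_level=2 then gives one row per
[family, masklen] pair with the count as the value, and the reduce value
never grows past a single number.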

-- 
Chris Anderson
http://jchrisa.net
http://couch.io
