couchdb-user mailing list archives

From Oliver Dain <opub...@dains.org>
Subject Re: Why do couchdb reduce functions have to be commutative
Date Tue, 03 Dec 2013 18:00:30 GMT
Hi Robert,

Thanks very much for the reply. That makes sense.
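
Just to check my understanding, here's a rough sketch of the flow in plain
JavaScript (hypothetical two-shard layout and a sum-style reduce of my own
invention, not the actual cluster internals):

    // CouchDB reduce functions take (keys, values, rereduce); when
    // rereduce is true, keys is null and values are earlier reduce outputs.
    function reduce(keys, values, rereduce) {
      // Commutative and associative: the order of `values` doesn't matter.
      return values.reduce(function (a, b) { return a + b; }, 0);
    }

    // Each shard holds an effectively random subset of the rows and
    // produces an intermediate reduction over just its subset.
    var fromShard1 = reduce([["a", "doc1"], ["c", "doc3"]], [1, 3], false);
    var fromShard2 = reduce([["b", "doc2"], ["d", "doc4"]], [2, 4], false);

    // The coordinator rereduces the intermediates in whatever order the
    // shards respond, so both orderings must give the same answer.
    console.log(reduce(null, [fromShard2, fromShard1], true)); // 10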

I gather this means that if I'm running a single server, at least with
today's code, commutativity isn't required? If so, is that something I can
count on? For example, if I know my application is quite small and will
never be sharded, is it safe for me to use a non-commutative reduce?
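
For concreteness, the sort of non-commutative reduce I have in mind is
something like this (hypothetical; associative, but the result depends on
the order the values arrive in):

    // Concatenation is associative but not commutative: "ab" + "c" equals
    // "a" + "bc", but "abc" is not "cab". On a single unsharded node the
    // rows would be combined in key order; on a cluster the subsets could
    // combine in any order.
    function reduce(keys, values, rereduce) {
      return values.join("");
    }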

Thanks,
Oliver


On Tue, Dec 3, 2013 at 9:57 AM, Robert wrote:

> Because the order in which we pass keys and values to the reduce function
> is not defined. In sharded situations (like bigcouch, which is being
> merged) an intermediate reduce value on an effectively random subset
> of keys/values is generated at each node and a final rereduce is done
> on all the intermediates. The constraints on reduce functions exist in
> anticipation of clustering.
>
> B.
>
>
> On 1 December 2013 21:45, Oliver Dain <opublic@dains.org> wrote:
> > Hey CouchDB users,
> >
> > I've just started messing around with CouchDB and I understand why CouchDB
> > reduce functions need to be associative, but I don't understand why they
> > also have to be commutative. I posted a much more detailed version of this
> > question to StackOverflow yesterday, but haven't gotten an answer yet (my
> > SO experience says that means I probably won't ever get one). Figured it
> > might be smart to explicitly loop in the CouchDB community.
> >
> > The original StackOverflow question is here:
> >
> > http://stackoverflow.com/questions/20303355/why-do-couchdb-reduce-functions-have-to-be-commutative
> >
> > Any thoughts would be appreciated!
> >
> > Thanks,
> > Oliver
>
>
