incubator-couchdb-user mailing list archives

From: Robert Newson <rnew...@apache.org>
Subject: Re: Scaling with filtered replication
Date: Tue, 09 Jul 2013 18:09:03 GMT
The processing for the filter makes the underlying quadratic growth
hurt sooner, yes, but I took the question as written. If you didn't
have filters at all, but still had n^2 replications, you'd still have
a scaling problem; it's just not directly related to the filtering
overhead.
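
To put rough numbers on it, here's a back-of-the-envelope sketch
(TypeScript, purely illustrative; it assumes one central database, n
always-online clients each doing a filtered pull replication, and u
document updates per client per sync interval):

    // Cost model under the assumptions above; names and numbers are made up.
    function filterEvaluationsPerInterval(n: number, u: number): number {
      // Each client's filter runs over the updates of all n clients,
      // so total evaluations grow with the square of the client count.
      return n * n * u;
    }

    console.log(filterEvaluationsPerInterval(100, 10));   // 100000
    console.log(filterEvaluationsPerInterval(10000, 10)); // 1000000000

The per-document cost stays constant (one trip into SpiderMonkey per doc);
it's the number of documents every filter has to look at that grows with n^2.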

B.


On 9 July 2013 19:03, Jens Alfke <jens@couchbase.com> wrote:
>
> On Jul 9, 2013, at 8:50 AM, Robert Newson <rnewson@apache.org> wrote:
>
>> It's not true. Passing replication through a filter is a linear
>> slowdown (the cost of passing the document to spidermonkey for
>> evaluation), nothing more. Filtered replication is as
>> incremental/resumable as non-filtered replication.
>
> I’ve heard from mobile-app developers for whom this has become a scaling
> problem. It’s a linear slowdown, yes, but the CPU time to run the JS function
> is multiplied by the number of clients squared times the number of doc
> updates each client produces, since every client sees the updates from every
> other client.
>
> (That’s assuming the clients are always online and replicating. If not,
> multiple updates to the same doc in between replications will get coalesced,
> lowering the workload.)
>
> From the n^2 factor in the number of clients, I’d guess that this is less of
> an issue for a server-to-server setup where the number of database instances
> isn’t too big, but gets bad when you get to thousands or hundreds of
> thousands of clients.
>
> —Jens
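
(For concreteness, the kind of per-document filter under discussion looks
roughly like the sketch below. The "channel" field and query parameter are
made-up examples, and the type annotations are TypeScript; in CouchDB the
function is stored as plain JavaScript in a design document.)

    // Illustrative filter: replicate only docs whose hypothetical "channel"
    // field matches the channel the client asked for via ?channel=... .
    function channelFilter(doc: any, req: any): boolean {
      return doc.channel === req.query.channel;
    }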
