couchdb-dev mailing list archives

From Paul Davis <>
Subject Re: couchdb transactions changes
Date Tue, 10 Feb 2009 00:41:04 GMT
On Mon, Feb 9, 2009 at 7:23 PM, Antony Blakey <> wrote:
> On 10/02/2009, at 10:30 AM, Paul Davis wrote:
>> Alas it is true, if you have a system whose operating
>> characteristics involve a write load that far outweighs the read
>> load
> I don't think it has to far outweigh it, just exceed it. And in any case,
> that's assuming there's a single reader. With replication you might
> have many readers doing duplicate reads, which means that the effective read
> rate w.r.t. the write rate is divided by the number of readers.
>
> For example, consider that you have 100 replication readers. Each reader will
> advance at 1/100th the rate of a single reader. So the write rate only has
> to be > 1% of the theoretical read rate to exceed the effective read
> rate/progress of the individual replicators.
>
> And that's ignoring the fact that the effective read rate for replication
> depends on bandwidth and connectivity, e.g. throughput is not the same as
> raw read rate in any case.

Actually, seeing as file system reads would get cache hits 100%
of the time, and your disk write throughput would be limited to 60.45%,
coupled with the fact that the write rate is less than 1% of the
available bandwidth, and if the moon is in its third phase, we can
definitively prove that hand waving about numbers proves nothing.

On the other hand, if someone were to benchmark this, then we could
discuss the reality of what sustainable read/write rates CouchDB can
handle. The reports I've heard about load lead me to believe that
your numbers are a bit off, but I couldn't tell you without repeatable
benchmarks.

Paul Davis

> Antony Blakey
> -------------
> CTO, Linkuistics Pty Ltd
> Ph: 0438 840 787
> All that is required for evil to triumph is that good men do nothing.
