couchdb-dev mailing list archives

From Chris Anderson <>
Subject Re: Preserving seq order through replication
Date Fri, 05 Mar 2010 22:01:10 GMT
On Fri, Mar 5, 2010 at 11:27 AM, Adam Kocoloski <> wrote:
> On Mar 5, 2010, at 2:13 PM, Randall Leeds wrote:
>> I believe replication right now sorts the incoming documents by seq
>> (a comment says something like 'just in case'), but then they are fetched
>> with some amount of concurrency and inserted as they arrive. Adam,
>> please chime in if I'm reading it wrong, as I think some of those
>> comments are yours.
> Yep, that's basically it.
>> On Fri, Mar 5, 2010 at 09:10, Adam Kocoloski <> wrote:
>>> With that said, making replication preserve the update order is probably
>>> not very difficult or expensive to do.  Best,
>>> Adam
>> It could be a performance loss if couch_rep_writer had to buffer
>> writes to preserve insertion order. Alternatively, to prevent the
>> write queue from growing unboundedly if one document repeatedly fails,
>> couch_rep_reader could wait for a chunk of contiguous documents before
>> handing them over to the writer.
> If we were to do this, I'd implement it that 2nd way, where the reader only
> hands over contiguous blocks to the writer, and doesn't "get too far ahead
> of itself"
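The contiguous-hand-off idea quoted above can be sketched as a reorder buffer: the reader pushes docs in whatever order the concurrent fetches complete, and the buffer releases them only once an unbroken run of sequence numbers is available. This is a minimal illustration in Python, not the Erlang of couch_rep; the class and method names are hypothetical.

```python
import heapq

class ContiguousBuffer:
    """Collect out-of-order (seq, doc) arrivals and release only
    contiguous runs, so the writer never observes a gap in seqs.
    Purely illustrative; not CouchDB internals."""

    def __init__(self, start_seq=0):
        self.next_seq = start_seq + 1  # next seq we expect to release
        self.heap = []                 # pending out-of-order arrivals

    def push(self, seq, doc):
        """Accept a fetched doc; return the docs (in seq order)
        that are now safe to hand to the writer."""
        heapq.heappush(self.heap, (seq, doc))
        ready = []
        while self.heap and self.heap[0][0] == self.next_seq:
            _, d = heapq.heappop(self.heap)
            ready.append(d)
            self.next_seq += 1
        return ready
```

Because the buffer holds at most the docs between the lowest missing seq and the highest fetched one, it also bounds how far the reader can "get ahead of itself"; a repeatedly failing doc simply stalls the release point rather than growing the writer's queue.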

I still think the whole notion is unnecessary complexity that creates
guarantees we'd rather not have. But I'm not gonna say don't write it.
It's just that if someone relies on this, we'll have to do extra work
to explain to them why their code broke when they scaled up.

_local_seq should be considered a smell (but sometimes a necessary one
for realtime apps...)

>> Concurrently fetching documents rather than just pipelining them on
>> one HTTP connection might not seem beneficial at first glance, but a
>> source whose disks can service concurrent reads stands to benefit.
>> When the source can't actually do this, it's up to the OS/FS/disk to
>> schedule the reads as efficiently as possible.
> +1
> Adam
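The concurrent-fetch point above is easy to demonstrate with a bounded worker pool: fetches overlap in flight, yet the results come back in request order, which is the property the writer-ordering discussion depends on. A minimal Python sketch; `fetch_doc` is a stand-in for the replicator's HTTP GET, not a real CouchDB API.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_doc(doc_id):
    # Placeholder for an HTTP GET against the source database;
    # a real replicator would issue the request here.
    return {"_id": doc_id}

def fetch_concurrently(doc_ids, workers=4):
    """Fetch docs with bounded concurrency.  Executor.map() yields
    results in the order of its inputs even though the underlying
    fetches complete out of order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch_doc, doc_ids))
```

The `workers` bound plays the same role as the reader's concurrency limit: enough parallelism to keep the source's disks busy, without flooding it.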

Chris Anderson
