couchdb-dev mailing list archives

From Alexander Shorin <>
Subject Re: NPM, CouchDB and big attachments
Date Wed, 27 Nov 2013 12:26:13 GMT
On Wed, Nov 27, 2013 at 3:59 PM, Robert Newson <> wrote:
> Particularly, we could make
> attachment replication resumable. Currently, if we replicate 99.9% of
> a large attachment, lose our connection, and resume, we'll start over
> from byte 0. This is why, elsewhere, there's a suggestion of 'one
> attachment per document'. That is a horrible and artificial constraint
> just to work around replicator deficiencies. We should encourage sane
> design (related attachments together in the same document) and fix the
> bugs that prevent heavy users from following it.

I think the key missing piece is a semi-persistent buffer on the Target
side that could hold already-received data. With such a buffer, the
replicator could use the Range header to send only the missing
attachment chunks to the Target, since the doc and the other bytes are
already there in the buffer. Once every byte has been transferred
successfully, the doc and its attachments would move from this buffer
into the target database (or be deleted after some timeout). But this
isn't a good solution, right?
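
The buffer-plus-Range idea above could be sketched roughly like this (a
toy in-memory model with hypothetical names, not anything CouchDB's
replicator actually implements): partial attachment bytes are kept keyed
by (doc id, attachment name), a Range header is derived from how much has
already arrived, and a completed attachment is "committed" out of the
buffer.

```python
from typing import Optional


class AttachmentBuffer:
    """Toy model of the semi-persistent target-side buffer described
    above. Partially received attachment bytes are kept keyed by
    (doc_id, attachment_name) until the transfer completes."""

    def __init__(self) -> None:
        self._partial: dict[tuple[str, str], bytearray] = {}

    def received(self, doc_id: str, name: str) -> int:
        """How many bytes of this attachment have already arrived."""
        return len(self._partial.get((doc_id, name), b""))

    def range_header(self, doc_id: str, name: str,
                     total_size: int) -> Optional[str]:
        """Range header value requesting only the missing tail,
        or None when the attachment is already complete."""
        done = self.received(doc_id, name)
        if done >= total_size:
            return None
        return f"bytes={done}-{total_size - 1}"

    def append(self, doc_id: str, name: str, chunk: bytes) -> None:
        """Store a newly received chunk in the buffer."""
        self._partial.setdefault((doc_id, name), bytearray()).extend(chunk)

    def commit(self, doc_id: str, name: str) -> bytes:
        """Move the completed attachment out of the buffer (into the
        target database, in the proposal) and drop the partial state."""
        return bytes(self._partial.pop((doc_id, name)))
```

So a replication that died at 99.9% of a 1000-byte attachment would ask
for `bytes=999-999` on resume instead of starting over from byte 0.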
