couchdb-user mailing list archives

From Robert Newson <rnew...@apache.org>
Subject Re: _replication database pruning
Date Thu, 19 Jan 2012 11:16:14 GMT
In a word, "no".

In more words: Replication comes in several flavors now.

The original flavor, Vanilla, was a one-off replication that ensured
the target had all leaf revisions of all documents at the source from
the time you started replication.
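A minimal sketch of the Vanilla flavor on the command line, assuming a CouchDB listening on localhost:5984 and databases foo and bar that already exist:

```shell
# One-off replication: POST source and target to /_replicate.
# The request completes once the target has all leaf revisions
# the source had when the replication started.
curl -H 'Content-Type: application/json' -X POST \
  http://localhost:5984/_replicate \
  -d '{"source": "http://127.0.0.1:5984/foo",
       "target": "http://127.0.0.1:5984/bar"}'
```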

The next flavor we added, Double Chocolate Chip (activated by
"continuous":true, for obscure reasons), is similar to the first but
continues to send updates from the source to the target until one of
them crashes, the link is lost, there is an eclipse, or the tide goes
out.
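The continuous variety is the same request with one extra field, again assuming a local CouchDB and existing foo and bar databases:

```shell
# Continuous replication: identical to the one-off form, plus
# "continuous": true. Changes on the source keep flowing to the
# target until the replication is interrupted.
curl -H 'Content-Type: application/json' -X POST \
  http://localhost:5984/_replicate \
  -d '{"source": "http://127.0.0.1:5984/foo",
       "target": "http://127.0.0.1:5984/bar",
       "continuous": true}'
```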

The newest flavor, Coconut Mocha Chocolate Fudge Extravaganza, can
work in either way but is persisted. You can choose this flavor by
saving the same values you would POST to /_replicate as a document in
the special _replicator database.
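For the persisted flavor, the same body is saved as a document. A sketch, where the document id "foo-to-bar" is an arbitrary example name:

```shell
# Persistent replication: PUT the same JSON you would POST to
# /_replicate as a document in the _replicator database. Giving it
# a known id makes it easy to find (and delete) later.
curl -H 'Content-Type: application/json' -X PUT \
  http://localhost:5984/_replicator/foo-to-bar \
  -d '{"source": "http://127.0.0.1:5984/foo",
       "target": "http://127.0.0.1:5984/bar",
       "continuous": true}'
```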

If you don't want a record of the replication, don't ask for one. A
one-off replication, once complete, will not run again; doing it via
the _replicator database is your way of asking for a durable receipt.
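And if the receipts do pile up, they can be pruned by hand: deleting a document from _replicator cancels that replication and removes its record. A sketch, using the hypothetical "foo-to-bar" id from above (the delete needs the document's current _rev):

```shell
# Fetch the replication document's current revision, then delete it.
# Deleting the document cancels the replication and drops the record.
REV=$(curl -s http://localhost:5984/_replicator/foo-to-bar \
      | python3 -c 'import json,sys; print(json.load(sys.stdin)["_rev"])')
curl -X DELETE "http://localhost:5984/_replicator/foo-to-bar?rev=$REV"
```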

That made me hungry, biab.

B.

On 19 January 2012 11:02, Steven Ringo <google@stevenringo.com> wrote:
> This is the recommended way of starting a replication, no?
>
> curl -H 'Content-Type: application/json' -X POST
> http://localhost:5984/_replicator -d '{"source":
> "http://127.0.0.1:5984/foo", "target": "http://127.0.0.1:5984/bar"}'
>
> It also is the way the library I am using (CouchCocoa) does it.
>
> I realise one can post to /_replicate (as opposed to _replicator), but then
> there's no recovery if the database is restarted, right?
>
> What else should I be using?
>
>
>
>
>
> Robert Newson wrote:
>>
>> Why are you using the _replicator db for one-off replications in the
>> first place? :)
>>
>> B.
>>
>> On 19 January 2012 04:07, Steven Ringo<google@stevenringo.com>  wrote:
>>>
>>> I notice the _replicator database fills up with entries as one-off
>>> replications take place. In the case of many replications being fired off
>>> (e.g. filtered per-user) the database may end up getting large.
>>>
>>> Like log file rotation, etc. is there a way to automatically prune or do
>>> something to keep it cleaned?
>>>
>>> Thanks,
>>>
>>> Steve
>>>
>>>
>
