couchdb-dev mailing list archives

From "Filipe Manana (JIRA)" <>
Subject [jira] Updated: (COUCHDB-1080) fail fast with checkpoint conflicts
Date Thu, 03 Mar 2011 19:26:37 GMT


Filipe Manana updated COUCHDB-1080:

    Attachment: COUCHDB-1009-3.patch

Ok, I just tested the patch under error conditions, and the following caused a nasty badarg
stack trace:

+    RepInfo = io_lib:format("replication `~s` (`~s` -> `~s`)",
+        [BaseId ++ Ext, Rep#rep_state.source_name, Rep#rep_state.target_name]),

Rep is not a #rep_state record; it's a #rep record.
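
The fix should be to read the names from the #rep record instead, roughly like this
(field names assumed; check the actual #rep definition):

+    RepInfo = io_lib:format("replication `~s` (`~s` -> `~s`)",
+        [BaseId ++ Ext, Rep#rep.source, Rep#rep.target]),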

I also changed the way the process terminates. Instead of calling exit(Reason), it now
terminates by returning {stop, Reason, State}, which is more OTPish.
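
For reference, the termination pattern looks roughly like this (a minimal standalone
gen_server sketch, not the actual replicator module):

-module(stop_example).
-behaviour(gen_server).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2,
         terminate/2, code_change/3]).

init(Args) ->
    %% trap exits so we receive 'EXIT' messages instead of dying silently
    process_flag(trap_exit, true),
    {ok, Args}.

handle_call(_Req, _From, State) ->
    {reply, ok, State}.

handle_cast(_Msg, State) ->
    {noreply, State}.

%% Instead of calling exit(Reason), return {stop, Reason, State} so
%% that OTP invokes terminate/2 and shuts the server down cleanly.
handle_info({'EXIT', _Pid, Reason}, State) ->
    {stop, Reason, State};
handle_info(_Msg, State) ->
    {noreply, State}.

terminate(_Reason, _State) ->
    %% cleanup runs here because of the {stop, ...} return above
    ok.

code_change(_OldVsn, State, _Extra) ->
    {ok, State}.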

The error sent in the JSON body to the client when one of the databases has been re-opened
looks like:

{"error":"checkpoint_commit_failure","reason":"Source database out of sync. Try to increase max_dbs_open at the source's server."}


> fail fast with checkpoint conflicts
> -----------------------------------
>                 Key: COUCHDB-1080
>                 URL:
>             Project: CouchDB
>          Issue Type: Improvement
>          Components: Replication
>    Affects Versions: 1.0.2
>            Reporter: Randall Leeds
>             Fix For: 1.1, 1.2
>         Attachments: COUCHDB-1009-3.patch, COUCHDB-1080-2-fdmanana.patch, COUCHDB-1080-fdmanana.patch,
>                      paranoid_checkpoint_failure.patch, paranoid_checkpoint_failure_v2.patch
> I've thought about this long and hard and probably should have submitted the bug a long
> time ago. I've also run this in production for months.
> When a checkpoint conflict occurs it is almost always the right thing to do to abort.
> If there is a rev mismatch, it could mean there are two conflicting replications (one
> continuous, one one-shot) running between the same hosts. Without reloading the history
> documents, checkpoints will continue to fail forever. This could leave us in a state with
> many replicated changes but no checkpoints.
> Similarly, a successful checkpoint but a lost/timed-out response could cause this situation.
> Since the supervisor will restart the replication anyway, I think it's safer to abort
> and retry.
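
For illustration, the fail-fast behaviour described above boils down to something like this
(hypothetical helper name, not the actual patch code):

checkpoint(State) ->
    case update_checkpoint_doc(State) of   % hypothetical helper
        {ok, NewState} ->
            {noreply, NewState};
        {error, conflict} ->
            %% abort instead of retrying; the supervisor restarts the
            %% replication, which reloads the checkpoint history
            {stop, checkpoint_commit_failure, State}
    end.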

This message is automatically generated by JIRA.
For more information on JIRA, see:

