couchdb-dev mailing list archives

From "Filipe Manana (JIRA)" <>
Subject [jira] Updated: (COUCHDB-1080) fail fast with checkpoint conflicts
Date Fri, 04 Mar 2011 15:19:36 GMT


Filipe Manana updated COUCHDB-1080:

    Attachment: COUCHDB-1080-4-fdmanana.patch

Thanks Randall, well spotted. This issue affects both replicators.

I made further changes to be even more explicit when there's an error updating one of the
checkpoints. It no longer crashes with a badmatch when it fails to update a checkpoint;
instead it sends a message like the following back to the client:

{"error":"checkpoint_commit_failure","reason":"Error updating the source checkpoint document:

Also added the following clause:

+    {_NewSrcInstanceStartTime, _NewTgtInstanceStartTime} ->
+        {checkpoint_commit_failure, <<"Source and target databases out of "
+            "sync. Try to increase max_dbs_open at both servers.">>}

It's probably very unlikely to happen, but one never knows :)
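
For context, the new clause sits at the end of a case over the pair of current instance
start times, compared against the values recorded when the replication started. Roughly, as
a sketch (the function and variable names here are illustrative, not necessarily the
patch's; record_checkpoints/2 is hypothetical):

    %% SrcInstanceStartTime/TgtInstanceStartTime were recorded when the
    %% replication started; the #db record fields hold the current values
    do_checkpoint(SourceDb, TargetDb, SrcInstanceStartTime, TgtInstanceStartTime) ->
        case {SourceDb#db.instance_start_time, TargetDb#db.instance_start_time} of
        {SrcInstanceStartTime, TgtInstanceStartTime} ->
            %% neither database restarted; safe to record the checkpoint
            record_checkpoints(SourceDb, TargetDb);
        {_NewSrcInstanceStartTime, TgtInstanceStartTime} ->
            {checkpoint_commit_failure, <<"Database on source restarted.">>};
        {SrcInstanceStartTime, _NewTgtInstanceStartTime} ->
            {checkpoint_commit_failure, <<"Database on target restarted.">>};
        {_NewSrcInstanceStartTime, _NewTgtInstanceStartTime} ->
            %% the catch-all clause quoted above: both sides restarted
            {checkpoint_commit_failure, <<"Source and target databases out of "
                "sync. Try to increase max_dbs_open at both servers.">>}
        end.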

I'll commit this 4th patch later today or tomorrow if no complaints are raised.

thanks again

> fail fast with checkpoint conflicts
> -----------------------------------
>                 Key: COUCHDB-1080
>                 URL:
>             Project: CouchDB
>          Issue Type: Improvement
>          Components: Replication
>    Affects Versions: 1.0.2
>            Reporter: Randall Leeds
>             Fix For: 1.1, 1.2
>         Attachments: COUCHDB-1080-2-fdmanana.patch, COUCHDB-1080-3-fdmanana.patch, COUCHDB-1080-4-fdmanana.patch,
> COUCHDB-1080-fdmanana.patch, paranoid_checkpoint_failure.patch, paranoid_checkpoint_failure_v2.patch
> I've thought about this long and hard and probably should have submitted the bug a long
> time ago. I've also run this in production for months.
> When a checkpoint conflict occurs, aborting is almost always the right thing to do.
> If there is a rev mismatch, it could mean there are two conflicting replications (one
> continuous and one one-shot) running between the same hosts. Without reloading the history
> documents, checkpoints will continue to fail forever. This could leave us in a state with
> many replicated changes but no checkpoints.
> Similarly, a successful checkpoint but a lost/timed-out response could cause this situation.
> Since the supervisor will restart the replication anyway, I think it's safer to abort
> and retry.
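
The fail-fast behavior described above amounts to treating a checkpoint conflict as fatal
for the current replication process. As a sketch (not the attached patch;
update_checkpoint_doc/2 is a hypothetical helper assumed to throw conflict on a 409
response):

    commit_checkpoint(Db, CheckpointDoc) ->
        try update_checkpoint_doc(Db, CheckpointDoc) of
        {ok, _NewRev} ->
            ok
        catch
        throw:conflict ->
            %% another replication between the same endpoints won the race;
            %% retrying with the stale rev would fail forever, so abort and
            %% let the supervisor restart the replication, which reloads
            %% the checkpoint history documents
            exit(checkpoint_conflict)
        end.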

This message is automatically generated by JIRA.
For more information on JIRA, see:

