incubator-couchdb-dev mailing list archives

From "Randall Leeds (JIRA)" <j...@apache.org>
Subject [jira] Commented: (COUCHDB-1080) fail fast with checkpoint conflicts
Date Thu, 03 Mar 2011 16:39:36 GMT

    [ https://issues.apache.org/jira/browse/COUCHDB-1080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13002055#comment-13002055 ]

Randall Leeds commented on COUCHDB-1080:
----------------------------------------

With your patch, commit_to_both can return errors; before, it just threw badmatch exceptions.
The old way, the only path to the case clause with my log message was a successful
_ensure_full_commit whose instance_start_time did not match (the source or target crashed or
closed and may have lost some of the changes we replicated, so we can't checkpoint).
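
Roughly, the old shape looked like this (a sketch from memory with illustrative names, not
the exact couch_rep source):

    %% RecordedSrcTime/RecordedTgtTime were bound when replication started,
    %% so the first clause only matches when neither database was restarted.
    %% Any commit failure crashed with a badmatch inside commit_to_both
    %% before this case was ever reached.
    case commit_to_both(Source, Target) of
        {RecordedSrcTime, RecordedTgtTime} ->
            record_checkpoint(State);
        _Else ->
            %% _ensure_full_commit succeeded but an instance_start_time
            %% changed: replicated changes may have been lost, so we
            %% can't checkpoint safely
            ?LOG_INFO("source or target was restarted; not checkpointing", []),
            State#state{checkpoint_scheduled = nil}
    end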

I like the way you changed it, but I also like the helpful error message I added. Maybe now
we can have four case clauses in do_checkpoint, so that we can be very clear to the user
about what went wrong? Does this make sense to you now?
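
For concreteness, the four clauses could look something like this (a sketch only; the error
tuple shapes and messages are illustrative, not taken from either patch):

    case commit_to_both(Source, Target) of
        {RecordedSrcTime, RecordedTgtTime} ->
            %% 1. both commits succeeded and neither db restarted: checkpoint
            record_checkpoint(State);
        {{error, Reason}, _} ->
            %% 2. _ensure_full_commit failed against the source
            ?LOG_ERROR("could not commit source database: ~p", [Reason]),
            State#state{checkpoint_scheduled = nil};
        {_, {error, Reason}} ->
            %% 3. _ensure_full_commit failed against the target
            ?LOG_ERROR("could not commit target database: ~p", [Reason]),
            State#state{checkpoint_scheduled = nil};
        _Else ->
            %% 4. commits succeeded but an instance_start_time changed
            ?LOG_INFO("source or target was restarted; not checkpointing", []),
            State#state{checkpoint_scheduled = nil}
    end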

> fail fast with checkpoint conflicts
> -----------------------------------
>
>                 Key: COUCHDB-1080
>                 URL: https://issues.apache.org/jira/browse/COUCHDB-1080
>             Project: CouchDB
>          Issue Type: Improvement
>          Components: Replication
>    Affects Versions: 1.0.2
>            Reporter: Randall Leeds
>             Fix For: 1.1, 1.2
>
>         Attachments: COUCHDB-1080-fdmanana.patch, paranoid_checkpoint_failure.patch, paranoid_checkpoint_failure_v2.patch
>
>
> I've thought about this long and hard and probably should have submitted the bug a long time ago. I've also run this in production for months.
> When a checkpoint conflict occurs, aborting is almost always the right thing to do.
> If there is a rev mismatch, it could mean there are two conflicting replications (one continuous and one one-shot) running between the same hosts. Without reloading the history documents, checkpoints will continue to fail forever. This could leave us in a state with many replicated changes but no checkpoints.
> Similarly, a successful checkpoint whose response was lost or timed out could cause the same situation.
> Since the supervisor will restart the replication anyway, I think it's safer to abort and retry.
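
A minimal sketch of the fail-fast behavior described above (write_checkpoint_doc and the
exit reason are illustrative names, not taken from the attached patches):

    %% a conflict on the _local checkpoint doc means our cached history is
    %% stale (e.g. a concurrent replication between the same hosts); die
    %% and let the supervisor restart us with freshly loaded history
    case write_checkpoint_doc(Db, CheckpointDoc) of
        {ok, NewRev} ->
            {ok, NewRev};
        conflict ->
            exit(checkpoint_conflict)
    end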

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
