incubator-couchdb-dev mailing list archives

From "Randall Leeds (JIRA)" <j...@apache.org>
Subject [jira] Updated: (COUCHDB-1080) fail fast with checkpoint conflicts
Date Thu, 03 Mar 2011 03:07:37 GMT

     [ https://issues.apache.org/jira/browse/COUCHDB-1080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Randall Leeds updated COUCHDB-1080:
-----------------------------------

    Attachment: paranoid_checkpoint_failure_v2.patch

Thanks for the feedback, Filipe.
In this version I clarified the new log message so that it also offers a suggestion about why the
conflict occurred.

The failure reason, along with the replication info, now gets logged in one place, in terminate/2.
It should be easy for users to match up the first error (which includes a suggestion about what to
fix) with the replication that failed for the same reason (logged in terminate).
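
In sketch form, the terminate/2 clause is shaped roughly like this (the record field, reason term,
and exact message are illustrative here, not the exact patch code):

    terminate(checkpoint_commit_failure = Reason, #state{rep_id = RepId}) ->
        %% Single place where the failure reason and the replication info are
        %% logged together, so users can match it up with the earlier error.
        ?LOG_ERROR("Replication `~s` failed: ~p", [RepId, Reason]);
    terminate(_Reason, _State) ->
        ok.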

No suggestion is made at error time that the replication might restart. When the supervisor
restarts the dead replication it will log that at INFO (couch_replicator.erl#L127).
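
For reference, the fail-fast path itself looks roughly like the following (again, helper names are
illustrative rather than the exact patch code):

    do_checkpoint(State) ->
        case commit_checkpoint_docs(State) of
            {ok, NewState} ->
                {ok, NewState};
            {error, conflict} ->
                %% Another replication between the same databases (or a lost
                %% response to an earlier checkpoint) already updated the
                %% checkpoint document. Stop instead of retrying forever; the
                %% supervisor restarts the replication, which reloads the
                %% history documents and resumes from the last good checkpoint.
                {stop, checkpoint_commit_failure, State}
        end.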

How's this look to you?

> fail fast with checkpoint conflicts
> -----------------------------------
>
>                 Key: COUCHDB-1080
>                 URL: https://issues.apache.org/jira/browse/COUCHDB-1080
>             Project: CouchDB
>          Issue Type: Improvement
>          Components: Replication
>    Affects Versions: 1.0.2
>            Reporter: Randall Leeds
>             Fix For: 1.1, 1.2
>
>         Attachments: paranoid_checkpoint_failure.patch, paranoid_checkpoint_failure_v2.patch
>
>
> I've thought about this long and hard and probably should have submitted the bug a long
> time ago. I've also run this in production for months.
> When a checkpoint conflict occurs, aborting is almost always the right thing to do.
> If there is a rev mismatch, it could mean that two conflicting replications (one continuous and
> one one-shot) are running between the same hosts. Without reloading the history documents,
> checkpoints will continue to fail forever. This could leave us in a state with many replicated
> changes but no checkpoints.
> Similarly, a successful checkpoint whose response is lost or times out could cause the same situation.
> Since the supervisor will restart the replication anyway, I think it's safer to abort
> and retry.

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
