couchdb-dev mailing list archives

From "Filipe Manana (JIRA)" <j...@apache.org>
Subject [jira] Commented: (COUCHDB-1080) fail fast with checkpoint conflicts
Date Thu, 03 Mar 2011 14:20:39 GMT

    [ https://issues.apache.org/jira/browse/COUCHDB-1080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13002000#comment-13002000 ]

Filipe Manana commented on COUCHDB-1080:
----------------------------------------

Randall,

+        ?LOG_ERROR("checkpoint failure: a database was closed (replication "
+               "count exceeds max_dbs_open?)", []),
+        exit({checkpoint_commit_failure, database_closed})

I still don't think this log message is right. The call to _ensure_full_commit might fail
for several reasons, especially for remote databases. Why do you assume the database was
closed and completely ignore the failure reason? I think the right thing to do is to exit
with {checkpoint_commit_failure, Reason}.
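
For illustration, something along these lines (just a sketch; ensure_full_commit/1 and its
return shape are assumptions made up for the example, not the actual couch_rep code):

    case ensure_full_commit(Target) of
    {ok, TargetStartTime} ->
        TargetStartTime;
    {error, Reason} ->
        %% Keep whatever the commit call actually reported instead of
        %% guessing that the database was closed.
        ?LOG_ERROR("checkpoint failure: _ensure_full_commit failed: ~p",
            [Reason]),
        exit({checkpoint_commit_failure, Reason})
    end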

A minor nitpick here:

+    RepInfo = io_lib:format("replication `~s` (`~s` -> `~s`)",
+        [BaseId ++ Ext, Rep#rep_state.source_name, Rep#rep_state.target_name]),
+    ?LOG_ERROR("~s failed: ~p", [lists:flatten(RepInfo), Reason]),

You don't need to flatten. All the "io:format" family functions accept iolists for string
placeholders. Plus, the logger module converts the message to a binary anyway (before sending
it to the couch_log gen_event).
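
For example (made-up values; this runs as-is in an Erlang shell):

    %% io_lib:format/2 returns a deep iolist; ~s takes it directly, so
    %% lists:flatten/1 is redundant here.
    RepInfo = io_lib:format("replication `~s` (`~s` -> `~s`)",
        ["abc123+continuous", "http://a/db/", "http://b/db/"]),
    io:format("~s failed: ~p~n", [RepInfo, checkpoint_commit_failure]).
    %% prints: replication `abc123+continuous` (`http://a/db/` -> `http://b/db/`) failed: checkpoint_commit_failure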

I'm attaching an alternative patch here. Let me know what you think.

cheers

> fail fast with checkpoint conflicts
> -----------------------------------
>
>                 Key: COUCHDB-1080
>                 URL: https://issues.apache.org/jira/browse/COUCHDB-1080
>             Project: CouchDB
>          Issue Type: Improvement
>          Components: Replication
>    Affects Versions: 1.0.2
>            Reporter: Randall Leeds
>             Fix For: 1.1, 1.2
>
>         Attachments: COUCHDB-1080-fdmanana.patch, paranoid_checkpoint_failure.patch, paranoid_checkpoint_failure_v2.patch
>
>
> I've thought about this long and hard and probably should have submitted the bug a long
> time ago. I've also run this in production for months.
> When a checkpoint conflict occurs, it is almost always the right thing to do to abort.
> If there is a rev mismatch, it could mean there are two conflicting (continuous and
> one-shot) replications running between the same hosts. Without reloading the history
> documents, checkpoints will continue to fail forever. This could leave us in a state with
> many replicated changes but no checkpoints.
> Similarly, a successful checkpoint but a lost/timed-out response could cause the same situation.
> Since the supervisor will restart the replication anyway, I think it's safer to abort
> and retry.
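
(A rough sketch of the fail-fast idea described above; update_checkpoint_doc/2 and its
return values are hypothetical names for illustration, not taken from either attached patch:)

    save_checkpoint(Db, CheckpointDoc) ->
        case update_checkpoint_doc(Db, CheckpointDoc) of
        {ok, NewRev} ->
            {ok, NewRev};
        {error, conflict} ->
            %% Another writer (e.g. a second replication between the same
            %% hosts) bumped the checkpoint doc, so the history we hold is
            %% stale. Retrying with it would fail forever; exit instead and
            %% let the supervisor restart the replication, which reloads
            %% the checkpoint documents.
            exit({checkpoint_commit_failure, conflict})
        end.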

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
