cassandra-commits mailing list archives

From "Sam Tunnicliffe (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-13987) Multithreaded commitlog subtly changed durability
Date Mon, 13 Nov 2017 20:35:00 GMT


Sam Tunnicliffe commented on CASSANDRA-13987:

Previously, {{writeCDCIndexFile}} was only ever called after a flush, which would be
consistent with its comment that states:
{code}We persist the offset of the last data synced to disk so clients can parse only durable
data if they choose{code}
So currently this definition of durable would include durability in the face of host failures,
whereas with this patch the index file may contain offsets for segments that are durable under
process crash, but which have not yet been msynced/fsynced and so may not survive a host failure.
Should we move the call to {{writeCDCIndexFile}} into the {{if (flush || close)}} block, to
after the flush has completed?
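To make the suggested ordering concrete, here's a toy sketch of what I have in mind (names like {{SegmentSyncSketch}} and the event strings are illustrative stand-ins, not the real {{CommitLogSegment}} code): the index write happens only inside the flush branch, after the fsync, so any offset the index advertises is always host-durable.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the ordering question: the CDC index, which advertises
// "durable" offsets, should only be written after the flush completes.
// All names here are illustrative stand-ins for the real Cassandra classes.
class SegmentSyncSketch
{
    final List<String> events = new ArrayList<>();
    int writtenOffset = 0;
    int lastSyncedOffset = 0;

    void append(int bytes)
    {
        writtenOffset += bytes;
    }

    void sync(boolean flush, boolean close)
    {
        // chained markers make the section replayable after a process crash
        events.add("update chained markers up to " + writtenOffset);
        if (flush || close)
        {
            events.add("fsync");                  // now survives host failure
            lastSyncedOffset = writtenOffset;
            // index written only once the data is actually durable on disk
            events.add("writeCDCIndexFile " + lastSyncedOffset);
        }
    }

    public static void main(String[] args)
    {
        SegmentSyncSketch s = new SegmentSyncSketch();
        s.append(100);
        s.sync(false, false);  // no flush: index not advanced
        s.sync(true, false);   // flush: fsync strictly precedes the index write
        System.out.println(String.join("\n", s.events));
    }
}
```

With the current patch the index write sits outside that branch, so a non-flush sync would publish offsets that survive only a process crash, not a host crash.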

That question aside, the code seems solid and I've manually tested both as-is and with some
added hacks to inject failures etc., but I feel it could still benefit from some automated
testing to cover the new behaviour. I know that writing tests for this area is non-trivial
and usually involves byteman, but do you think it's worth adding a unit test or two for this?

* Typo in cassandra.yaml #380 s/mmaped/mmapped 
* The comment atop {{AbstractCommitLogSegmentManager::sync}} could use updating. The fact
that it says it flushes, but also takes a boolean flush arg is a bit confusing.
* {{CompressedSegment}} and {{EncryptedSegment}} no longer need to import {{SyncUtil}}

> Multithreaded commitlog subtly changed durability
> -------------------------------------------------
>                 Key: CASSANDRA-13987
>                 URL:
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Jason Brown
>            Assignee: Jason Brown
>             Fix For: 4.x
> When multithreaded commitlog was introduced in CASSANDRA-3578, we subtly changed the
way that commitlog durability worked. Everything still gets written to an mmap file. However,
not everything is replayable from the mmapped file after a process crash, in periodic mode.
> In brief, the reason this changed is due to the chained markers that are required for
the multithreaded commit log. At each msync, we wait for outstanding mutations to serialize
into the commitlog, and update a marker before and after the commits that have accumulated
since the last sync. With those markers, we can safely replay that section of the commitlog.
Without the markers, we have no guarantee that the commits in that section were successfully
written, thus we abandon those commits on replay.
> If you have correlated process failures of multiple nodes at "nearly" the same time (see
["There Is No Now"|]), it is possible to have data
loss if none of the nodes msync the commitlog. For example, with RF=3, if quorum write succeeds
on two nodes (and we acknowledge the write back to the client), and then the process on both
nodes OOMs (say, due to reading the index for a 100GB partition), the write will be lost if
neither process msync'ed the commitlog. More exactly, the commitlog cannot be fully replayed.
The reason why this data is silently lost is due to the chained markers that were introduced
with CASSANDRA-3578.
> The problem we are addressing with this ticket is incrementally improving 'durability'
in the face of a process crash, not a host crash. (Note: operators should use batch mode to ensure greater
durability, but batch mode in its current implementation is a) borked, and b) will burn through,
*very* rapidly, SSDs that don't have a non-volatile write cache sitting in front.) 
> The current default for {{commitlog_sync_period_in_ms}} is 10 seconds, which means that
a node could lose up to ten seconds of data due to process crash. The unfortunate thing is
that the data is still available, in the mmap file, but we can't replay it due to incomplete
chained markers.
> ftr, I don't believe we've ever had a stated policy about commitlog durability wrt process
crash. Pre-2.0 we naturally piggy-backed off the memory mapped file and the fact that every
mutation acquired a lock and wrote into the mmap buffer, and the ability to replay everything
out of it came for free. With CASSANDRA-3578, that was subtly changed. 
> Something [~jjirsa] pointed out to me is that MySQL provides a way to adjust the durability
of each commit in InnoDB via the {{innodb_flush_log_at_trx_commit}} setting. I'm using that idea as
a loose springboard for what to do here.

This message was sent by Atlassian JIRA
