cassandra-commits mailing list archives

From "Marcus Eriksson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-8911) Consider Mutation-based Repairs
Date Mon, 08 Aug 2016 08:48:21 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-8911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411511#comment-15411511 ]

Marcus Eriksson commented on CASSANDRA-8911:
--------------------------------------------

Thanks for the early review

bq. I don't entirely get why we need a separate HUGE response, can't we just page back the
data from the mismatching replica in the MBRResponse similar to ordinary responses?
There are two reasons: first, we save one round trip during the repair of a page. Second, it is
easier to diff the remote data against the local data if we are not paging (i.e., we know we are
comparing the entire result, not just a page of it).
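For illustration only (generic types, not the patch code): with the full remote result materialized,
the diff is a single pass in sort order, with no page boundaries to stitch together.

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.SortedMap;

// Sketch only: with the entire remote result in hand, one pass over it against
// the local result is enough to find what needs repairing.
final class FullResultDiff
{
    /** Entries present in {@code remote} but missing or different locally. */
    static <K, V> Map<K, V> diff(SortedMap<K, V> local, SortedMap<K, V> remote)
    {
        Map<K, V> toRepair = new LinkedHashMap<>();
        for (Map.Entry<K, V> e : remote.entrySet())
        {
            V mine = local.get(e.getKey());
            if (mine == null || !mine.equals(e.getValue()))
                toRepair.put(e.getKey(), e.getValue());
        }
        return toRepair;
    }
}
{code}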
bq. While it's handy to have a rows/s metric for repaired rows, I think it might be more useful
for operators to have a repair throttle knob in MB/S (in line with compaction/stream throughput)
rather than rows/s, since the load imposed by a repairing row can vary between different tables,
so it might be a bit trickier to do capacity planning based on rows/s.
I have not focused on metrics at all yet, and yes we should add more.
bq. Currently we're throttling only at the coordinator, but it would probably be interesting
to use the same rate limiter to also throttle participant reads/writes.
Hmm, maybe it makes more sense to only throttle on the replicas.
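For illustration, a MB/s-style throttle could reuse the same kind of limiter as the existing
compaction/stream throughput settings; a minimal sketch using Guava's RateLimiter (the class and
parameter names here are made up):

{code:java}
import com.google.common.util.concurrent.RateLimiter;

// Sketch only: throttle repair traffic in bytes/s instead of rows/s.
final class MBRThrottle
{
    private final RateLimiter limiter;

    MBRThrottle(int throughputMbPerSec)
    {
        // one permit == one byte
        this.limiter = RateLimiter.create(throughputMbPerSec * 1024.0 * 1024.0);
    }

    /** Block until we may send/apply {@code serializedSize} more bytes. */
    void acquire(long serializedSize)
    {
        if (serializedSize > 0)
            limiter.acquire((int) Math.min(serializedSize, Integer.MAX_VALUE));
    }
}
{code}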
bq. Can we make MBROnHeapUnfilteredPartitions a lazy iterator that caches rows while it's traversed?
good point, will fix
bq. Also, can we leverage MBROnHeapUnfilteredPartitions (or similar) to cache iterated rows
on MBRVerbHandler?
yep
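Roughly what that caching could look like (a generic sketch, not the actual
MBROnHeapUnfilteredPartitions/MBRVerbHandler code):

{code:java}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch only: pull lazily from the source and remember each element as the
// caller traverses it, so the consumed part can be replayed without re-reading.
final class CachingIterator<T> implements Iterator<T>
{
    private final Iterator<T> source;
    private final List<T> cache = new ArrayList<>();

    CachingIterator(Iterator<T> source)
    {
        this.source = source;
    }

    public boolean hasNext()
    {
        return source.hasNext();
    }

    public T next()
    {
        T next = source.next();
        cache.add(next); // cached only as it is traversed
        return next;
    }

    /** Re-iterate everything consumed so far. */
    Iterator<T> replay()
    {
        return cache.iterator();
    }
}
{code}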
bq. Try to generify unfiltered and filtered methods/classes into common classes/methods when
possible (UnfilteredPager, UnfilteredPagersIterator, getUnfilteredRangeSlice, etc) (this is
probably on your TODO but just a friendly reminder)
yeah, a bunch of todos in the code around this
bq. Right now the service runs a full repair for a table which probably makes sense for triggered
repairs, but when we make that continuous we will probably need to interleave subrange repair
of different tables to ensure fairness (so big tables don't starve small table repairs).
Yes, splitting the local ranges into smaller parts based on the size of the table and then
sequentially repairing each part probably makes sense.
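A rough sketch of that splitting and interleaving (the Table/size abstractions here are
placeholders, not the real code):

{code:java}
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Sketch only: split each table's local range into more pieces the larger the
// table is, then round-robin the pieces so a big table cannot starve small ones.
final class SubrangeScheduler
{
    interface Table<R>
    {
        long sizeOnDiskBytes();
        List<R> splitLocalRanges(int pieces);
    }

    static <R> List<R> schedule(List<Table<R>> tables, long targetBytesPerPiece)
    {
        List<Queue<R>> perTable = new ArrayList<>();
        for (Table<R> table : tables)
        {
            int pieces = (int) Math.max(1, table.sizeOnDiskBytes() / targetBytesPerPiece);
            perTable.add(new ArrayDeque<>(table.splitLocalRanges(pieces)));
        }

        List<R> interleaved = new ArrayList<>();
        boolean progress = true;
        while (progress) // take one piece per table per round
        {
            progress = false;
            for (Queue<R> queue : perTable)
            {
                R next = queue.poll();
                if (next != null)
                {
                    interleaved.add(next);
                    progress = true;
                }
            }
        }
        return interleaved;
    }
}
{code}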
bq. While repairing other replicas right away will probably get nodes consistent faster, I
wonder if we can make any simplification by repairing only the coordinator node, but since
the diffs are already there I don't see any reason not to do it.
Yes, we should repair the replicas as well, since we already know what the replicas need in order
to be in sync. Also, for incremental repair a range is only marked as repaired if we know that the
replicas are in sync.


> Consider Mutation-based Repairs
> -------------------------------
>
>                 Key: CASSANDRA-8911
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8911
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: Tyler Hobbs
>            Assignee: Marcus Eriksson
>             Fix For: 3.x
>
>
> We should consider a mutation-based repair to replace the existing streaming repair.  While
> we're at it, we could do away with a lot of the complexity around merkle trees.
> I have not planned this out in detail, but here's roughly what I'm thinking:
>  * Instead of building an entire merkle tree up front, just send the "leaves" one-by-one.
> Instead of dealing with token ranges, make the leaves primary key ranges.  The PK ranges
> would need to be contiguous, so that the start of each range would match the end of the previous
> range. (The first and last leaves would need to be open-ended on one end of the PK range.)
> This would be similar to doing a read with paging.
>  * Once one page of data is read, compute a hash of it and send it to the other replicas
> along with the PK range that it covers and a row count.
>  * When the replicas receive the hash, they perform a read over the same PK range (using
> a LIMIT of the row count + 1) and compare hashes (unless the row counts don't match, in which
> case this can be skipped).
>  * If there is a mismatch, the replica will send a mutation covering that page's worth
> of data (ignoring the row count this time) to the source node.
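As a rough illustration of the per-page exchange in the steps above (PageHash, the MD5 choice and
the helper methods are made-up placeholders, not part of the actual proposal or patch):

{code:java}
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.List;

// Sketch only: hash one page on the coordinator, re-read and compare on the replica.
final class PageHashExchange
{
    static final class PageHash
    {
        final byte[] digest;
        final int rowCount;

        PageHash(byte[] digest, int rowCount)
        {
            this.digest = digest;
            this.rowCount = rowCount;
        }
    }

    /** Coordinator: hash one page of (serialized) rows before sending it with its PK range. */
    static PageHash hashPage(List<String> serializedRows) throws Exception
    {
        MessageDigest md = MessageDigest.getInstance("MD5");
        for (String row : serializedRows)
            md.update(row.getBytes(StandardCharsets.UTF_8));
        return new PageHash(md.digest(), serializedRows.size());
    }

    /** Replica: read the same PK range with LIMIT rowCount + 1, then compare. */
    static boolean matches(PageHash remote, List<String> localRows) throws Exception
    {
        if (localRows.size() != remote.rowCount)
            return false; // row counts differ, no need to hash
        PageHash local = hashPage(localRows);
        return MessageDigest.isEqual(local.digest, remote.digest);
        // on a mismatch the replica would reply with a mutation covering this page
    }
}
{code}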
> Here are the advantages that I can think of:
>  * With the current repair behavior of streaming, vnode-enabled clusters may need to
> stream hundreds of small SSTables.  This results in increased compaction load on the receiving
> node.  With the mutation-based approach, memtables would naturally merge these.
>  * It's simple to throttle.  For example, you could give a number of rows/sec that should
> be repaired.
>  * It's easy to see what PK range has been repaired so far.  This could make it simpler
> to resume a repair that fails midway.
>  * Inconsistencies start to be repaired almost right away.
>  * Less special code (?)
>  * Wide partitions are no longer a problem.
> There are a few problems I can think of:
>  * Counters.  I don't know if this can be made safe, or if they need to be skipped.
>  * To support incremental repair, we need to be able to read from only repaired sstables.
> Probably not too difficult to do.
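For the last point, a hedged sketch of restricting reads to repaired sstables (the SSTable
interface and isRepaired() here stand in for whatever the read path actually exposes):

{code:java}
import java.util.ArrayList;
import java.util.List;

// Sketch only: incremental repair would read from sstables already marked repaired.
final class RepairedOnlyFilter
{
    interface SSTable
    {
        boolean isRepaired();
    }

    static List<SSTable> repairedOnly(Iterable<SSTable> candidates)
    {
        List<SSTable> repaired = new ArrayList<>();
        for (SSTable sstable : candidates)
            if (sstable.isRepaired()) // skip unrepaired / pending data
                repaired.add(sstable);
        return repaired;
    }
}
{code}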



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
