cassandra-commits mailing list archives

From "Fabien Rousseau (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-11349) MerkleTree mismatch when multiple range tombstones exists for the same partition and interval
Date Tue, 17 May 2016 21:54:13 GMT


Fabien Rousseau commented on CASSANDRA-11349:

Ok, this seems like a better approach (with it, RTs are added to the tracker through the ColumnIndex.add method).
I had some time to test it on a dev environment and repair did not find any differences (which is a good thing).

Regarding the case that is not working correctly, I think the solution is to use a RangeTombstoneList
before writing RangeTombstones.

The current implementation of Tracker.writeUnwrittenTombstones(...) is:
            for (RangeTombstone rt : unwrittenTombstones)
                size += writeTombstone(rt, out, atomSerializer);
And should be replaced by:
            RangeTombstoneList rtl = new RangeTombstoneList(comparator, unwrittenTombstones.size());
            for (RangeTombstone rt : unwrittenTombstones)
                rtl.add(rt);
            for (RangeTombstone rt : rtl)
                size += writeTombstone(rt, out, atomSerializer);
I haven't tested this but it should work.
The explanation for this is the following:
 - on node1, due to the flushes, each RT is written in its own SSTable
 - on node2, because all RTs are still in memory, they're held in a RangeTombstoneList, which only keeps non-overlapping RTs.

During repair, on node1, RTs are merged but kept as is (i.e. some RTs can overlap each other) while on node2 they can't.
By using a RangeTombstoneList before serializing the unwritten RTs, no RT can overlap another.
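To illustrate that normalization, here is a minimal self-contained sketch (this is not the actual RangeTombstoneList class; the RT type, the string bounds and the normalize method are simplified stand-ins invented for illustration). It shows how two stacked deletions of the same interval collapse into a single non-overlapping tombstone, which is the form node2 always serializes:
{noformat}
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the normalization done by RangeTombstoneList:
// for identical intervals only the newest deletion is kept, so the output
// never contains overlapping/stacked tombstones.
public class RTNormalizationSketch
{
    // Hypothetical tombstone over [start, end] with a deletion timestamp.
    static class RT
    {
        final String start, end;
        final long markedAt;
        RT(String start, String end, long markedAt) { this.start = start; this.end = end; this.markedAt = markedAt; }
        public String toString() { return "[" + start + "," + end + "]@" + markedAt; }
    }

    // Keep only the newest RT per identical interval (the case from this ticket,
    // where the same row is deleted twice); the real class also splits partial overlaps.
    static List<RT> normalize(List<RT> input)
    {
        List<RT> out = new ArrayList<>();
        for (RT rt : input)
        {
            boolean replaced = false;
            for (int i = 0; i < out.size(); i++)
            {
                RT existing = out.get(i);
                if (existing.start.equals(rt.start) && existing.end.equals(rt.end))
                {
                    if (rt.markedAt > existing.markedAt)
                        out.set(i, rt);
                    replaced = true;
                    break;
                }
            }
            if (!replaced)
                out.add(rt);
        }
        return out;
    }

    public static void main(String[] args)
    {
        List<RT> stacked = new ArrayList<>();
        stacked.add(new RT("b", "b", 1000)); // first DELETE of row 'b'
        stacked.add(new RT("b", "b", 2000)); // second DELETE of the same row, newer timestamp
        // node1 (one RT per flushed SSTable) serializes both entries of 'stacked';
        // node2's in-memory list only ever holds the normalized form printed below.
        System.out.println(normalize(stacked)); // [[b,b]@2000]
    }
}
{noformat}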

Note: the change above will also change the way RTs are serialized during normal compactions...

> MerkleTree mismatch when multiple range tombstones exists for the same partition and interval
> ---------------------------------------------------------------------------------------------
>                 Key: CASSANDRA-11349
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Fabien Rousseau
>            Assignee: Stefan Podkowinski
>              Labels: repair
>             Fix For: 2.1.x, 2.2.x
>         Attachments: 11349-2.1-v2.patch, 11349-2.1-v3.patch, 11349-2.1.patch
> We observed that repair, for some of our clusters, streamed a lot of data and many partitions
were "out of sync".
> Moreover, the read repair mismatch ratio is around 3% on those clusters, which is really high.
> After investigation, it appears that, if two range tombstones exist for a partition for the same range/interval, they're both included in the merkle tree computation.
> But, if for some reason, on another node, the two range tombstones were already compacted
into a single range tombstone, this will result in a merkle tree difference.
> Currently, this is clearly bad because MerkleTree differences are dependent on compactions (and if a partition is deleted and created multiple times, the only way to ensure that repair "works correctly"/"doesn't overstream data" is to major compact before each repair... which is not really feasible).
> Below is a list of steps allowing to easily reproduce this case:
> {noformat}
> ccm create test -v 2.1.13 -n 2 -s
> ccm node1 cqlsh
> CREATE KEYSPACE test_rt WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2};
> USE test_rt;
> CREATE TABLE table1 (
>     c1 text,
>     c2 text,
>     c3 float,
>     c4 float,
>     PRIMARY KEY ((c1), c2)
> );
> INSERT INTO table1 (c1, c2, c3, c4) VALUES ( 'a', 'b', 1, 2);
> DELETE FROM table1 WHERE c1 = 'a' AND c2 = 'b';
> ctrl ^d
> # now flush only one of the two nodes
> ccm node1 flush 
> ccm node1 cqlsh
> USE test_rt;
> INSERT INTO table1 (c1, c2, c3, c4) VALUES ( 'a', 'b', 1, 3);
> DELETE FROM table1 WHERE c1 = 'a' AND c2 = 'b';
> ctrl ^d
> ccm node1 repair
> # now grep the log and observe that some inconsistencies were detected between nodes (while it shouldn't have detected any)
> ccm node1 showlog | grep "out of sync"
> {noformat}
> Consequences of this are a costly repair, the accumulation of many small SSTables (up to thousands for a rather short period of time when using VNodes, until compaction absorbs those small files), but also an increased size on disk.
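To make the mismatch described in the quoted steps concrete, here is a toy digest comparison (this is not the actual validation compaction code, and the textual atom encoding below is invented): both nodes hold the same logical deletion, but node1 digests two stacked tombstones while node2 digests the single merged one, so the MerkleTree leaves differ and the range is reported "out of sync":
{noformat}
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

public class MerkleLeafMismatchSketch
{
    // Toy partition digest: every atom (here a range tombstone rendered as text)
    // is mixed into the hash, mimicking how validation digests rows and tombstones.
    static byte[] digest(String... atoms) throws Exception
    {
        MessageDigest md = MessageDigest.getInstance("MD5");
        for (String atom : atoms)
            md.update(atom.getBytes(StandardCharsets.UTF_8));
        return md.digest();
    }

    public static void main(String[] args) throws Exception
    {
        // node1: the flushed RT and the later RT were never merged, so both are digested.
        byte[] node1 = digest("RT[b,b]@1000", "RT[b,b]@2000");
        // node2: the same logical deletion, already reduced to the newest tombstone.
        byte[] node2 = digest("RT[b,b]@2000");
        // Same effective data, different digests => differing MerkleTree leaves => "out of sync".
        System.out.println(Arrays.equals(node1, node2)); // false
    }
}
{noformat}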

This message was sent by Atlassian JIRA
