cassandra-commits mailing list archives

From "Cristian P (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-13052) Repair process is violating the start/end token limits for small ranges
Date Wed, 04 Jan 2017 14:32:58 GMT


Cristian P commented on CASSANDRA-13052:

Hi Stefan,

In our PROD cluster (version 2.0.14) we saw a large amount of data being streamed for a single
token range (on the order of GBs). When we ran a repair for a full vnode range, we saw less data
being streamed (on the order of MBs).
I would suggest logging the ranges streamed for the above test. It is worth doing this after the
call to normalize.
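For illustration only, here is a minimal, self-contained sketch of the idea. The Range type and unwrap method below are hypothetical stand-ins for Cassandra's org.apache.cassandra.dht.Range and its normalization logic, not the real implementation; the point is just that a wrap-around range should be split into ordered subranges before being logged or streamed:

```java
import java.util.ArrayList;
import java.util.List;

public class RangeLogSketch {
    // Hypothetical simplified token range over signed 64-bit tokens,
    // standing in for org.apache.cassandra.dht.Range<Token>.
    record Range(long left, long right) {}

    // Split a range that crosses the ring minimum into two ordered
    // subranges -- roughly the effect of normalizing before streaming.
    // (This is a sketch; Cassandra's real normalize also sorts and
    // deoverlaps a collection of ranges.)
    static List<Range> unwrap(Range r) {
        List<Range> out = new ArrayList<>();
        if (r.left() >= r.right()) {               // wraps past Long.MAX_VALUE
            out.add(new Range(r.left(), Long.MAX_VALUE));
            out.add(new Range(Long.MIN_VALUE, r.right()));
        } else {
            out.add(r);
        }
        return out;
    }

    public static void main(String[] args) {
        for (Range r : unwrap(new Range(42L, 7L)))
            System.out.println("streaming range [" + r.left() + ", " + r.right() + "]");
    }
}
```

Logging each subrange at this point would make it obvious when the ranges actually streamed fall outside the range requested for repair.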


> Repair process is violating the start/end token limits for small ranges
> -----------------------------------------------------------------------
>                 Key: CASSANDRA-13052
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Streaming and Messaging
>         Environment: We tried this in 2.0.14 and 3.9, same bug.
>            Reporter: Cristian P
>            Assignee: Stefan Podkowinski
>         Attachments: ccm_reproduce-13052.txt, system-dev-debug-13052.log
> We tried to do a single-token repair by providing 2 consecutive token values for a large
column family. We soon noticed heavy streaming, and according to the logs the number of ranges
streamed was in the thousands.
> After investigation we found a bug in the two partitioner classes we use (RandomPartitioner
and Murmur3Partitioner).
> The midpoint method, used by MerkleTree.differenceHelper to find ranges with differences
for streaming, returns abnormal values (far outside the initial range requested for repair)
if the requested repair range is small (I expect smaller than 2^15).
> Here is the simple code to reproduce the bug for Murmur3Partitioner:
> Token left = new Murmur3Partitioner.LongToken(123456789L);
> Token right = new Murmur3Partitioner.LongToken(123456789L);
> IPartitioner partitioner = new Murmur3Partitioner();
> Token midpoint = partitioner.midpoint(left, right);
> System.out.println("Murmur3: [ " + left.getToken() + " : " + midpoint.getToken() + " : " + right.getToken() + " ]");
> The output is:
> Murmur3: [ 123456789 : -9223372036731319019 : 123456789 ]
> Note that the midpoint token is nowhere near the suggested repair range. This will happen
if, during the traversal of the tree (in MerkleTree.differenceHelper) in search of differences,
there aren't enough tokens for the split and the subrange becomes 0 (left.token = right.token),
as in the above test.
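As a sanity check on the reported value, the midpoint above can be reproduced with plain long arithmetic, independent of Cassandra: if the degenerate left == right range is treated as the full ring, the midpoint lands half the ring (2^63) away from left, and signed 64-bit overflow produces exactly the token printed above. (This is an arithmetic illustration, not a claim about how Murmur3Partitioner.midpoint is implemented internally.)

```java
public class MidpointWrap {
    public static void main(String[] args) {
        long token = 123456789L;
        // Half of a signed 64-bit ring is 2^63. Adding 2^63 to a long
        // overflows and wraps around, which is the same as adding
        // Long.MIN_VALUE.
        long midpoint = token + Long.MIN_VALUE;
        System.out.println(midpoint); // prints -9223372036731319019
    }
}
```

This matches the midpoint in the output above, which is why a tiny (or empty) repair range can produce a "midpoint" on the far side of the ring and trigger streaming of thousands of unrelated ranges.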

This message was sent by Atlassian JIRA
