cassandra-commits mailing list archives

From "Stefan Podkowinski (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-13052) Repair process is violating the start/end token limits for small ranges
Date Tue, 03 Jan 2017 15:20:58 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-13052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15795285#comment-15795285 ]

Stefan Podkowinski commented on CASSANDRA-13052:
------------------------------------------------

{quote}
We soon noticed heavy streaming, and according to the logs the number of ranges streamed was
in the thousands.
{quote}

[~cmposto], I'm currently a bit confused while trying to evaluate the actual effect of this
behaviour. At first I was a bit concerned that we'd either stream ranges that shouldn't be
repaired or send identical ranges thousands of times. But I've come to the conclusion that
neither should happen, as {{StreamSession.addTransferRanges()}} should normalize all redundant
ranges into a single range before starting to stream any files. Can you further describe how
this bug resulted in "heavy streaming" on your cluster? What was it that you noticed before
looking further into this issue?
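
For reference, here's a rough, self-contained sketch of the kind of normalization I have in
mind (hypothetical Interval/normalize names, not the actual {{StreamSession.addTransferRanges()}}
code, and ignoring ring wrap-around): thousands of identical or overlapping subranges should
collapse into a single range before anything is streamed.

{code:java}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class NormalizeSketch
{
    // Made-up (start, end] interval over the long token space; only stands in for
    // the real range class in this illustration.
    static final class Interval
    {
        final long start, end;
        Interval(long start, long end) { this.start = start; this.end = end; }
        @Override
        public String toString() { return "(" + start + "," + end + "]"; }
    }

    // Collapse duplicate and overlapping intervals into a minimal, sorted set.
    static List<Interval> normalize(List<Interval> input)
    {
        List<Interval> sorted = new ArrayList<>(input);
        sorted.sort(Comparator.comparingLong(i -> i.start));
        List<Interval> merged = new ArrayList<>();
        for (Interval i : sorted)
        {
            if (!merged.isEmpty() && i.start <= merged.get(merged.size() - 1).end)
            {
                // overlaps or duplicates the previous interval: extend it instead
                Interval last = merged.remove(merged.size() - 1);
                merged.add(new Interval(last.start, Math.max(last.end, i.end)));
            }
            else
            {
                merged.add(i);
            }
        }
        return merged;
    }

    public static void main(String[] args)
    {
        List<Interval> ranges = new ArrayList<>();
        for (int n = 0; n < 5000; n++)          // thousands of identical subranges
            ranges.add(new Interval(100L, 200L));
        ranges.add(new Interval(150L, 250L));   // plus one overlapping subrange
        System.out.println(normalize(ranges));  // prints [(100,250]]
    }
}
{code}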


> Repair process is violating the start/end token limits for small ranges
> -----------------------------------------------------------------------
>
>                 Key: CASSANDRA-13052
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13052
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Streaming and Messaging
>         Environment: We tried this in 2.0.14 and 3.9, same bug.
>            Reporter: Cristian P
>            Assignee: Stefan Podkowinski
>         Attachments: ccm_reproduce-13052.txt, system-dev-debug-13052.log
>
>
> We tried to do a single token repair by providing 2 consecutive token values for a large
column family. We soon noticed heavy streaming, and according to the logs the number of ranges
streamed was in the thousands.
> After investigation we found a bug in the two partitioner classes we use (RandomPartitioner
and Murmur3Partitioner).
> The midpoint method, used by MerkleTree.differenceHelper to find ranges with differences
for streaming, returns abnormal values (way outside the initial range requested for repair)
if the requested repair range is small (I expect smaller than 2^15).
> Here is the simple code to reproduce the bug for Murmur3Partitioner:
> Token left = new Murmur3Partitioner.LongToken(123456789L);
> Token right = new Murmur3Partitioner.LongToken(123456789L);
> IPartitioner partitioner = new Murmur3Partitioner();
> Token midpoint = partitioner.midpoint(left, right);
> System.out.println("Murmur3: [ " + left.getToken() + " : " + midpoint.getToken() + " : " + right.getToken() + " ]");
> The output is:
> Murmur3: [ 123456789 : -9223372036731319019 : 123456789 ]
> Note that the midpoint token is nowhere near the suggested repair range. This will happen
if, while walking the tree (in MerkleTree.differenceHelper) in search of differences, there
aren't enough tokens for the split and the subrange becomes 0 (left.token = right.token),
as in the test above.
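
For what it's worth, here is a rough sketch of the wrap-around arithmetic that appears to
produce that value, assuming an empty (left == right) range is treated as the full ring so
that the midpoint lands half the ring away; this is not the actual
{{Murmur3Partitioner.midpoint()}} code, just the arithmetic:

{code:java}
public class MidpointWrapSketch
{
    public static void main(String[] args)
    {
        long left = 123456789L;     // same token for left and right, as in the repro above

        // If left == right the range is interpreted as the whole ring, so "half the
        // range" is 2^63; adding it overflows the signed long and wraps to the far
        // side of the token space.
        long midpoint = left + (1L << 63);

        System.out.println(midpoint);   // -9223372036731319019, matching the reported output
    }
}
{code}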



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
