cassandra-commits mailing list archives

From "Benjamin Roth (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-12489) consecutive repairs of same range always finds 'out of sync' in sane cluster
Date Mon, 06 Mar 2017 20:43:33 GMT


Benjamin Roth commented on CASSANDRA-12489:

Thanks for the answer. That's what I thought. But what right to exist do incremental repairs
then have in the real world if (most, many, whatever) people use a tool that makes repairs
manageable and thereby eliminates this case? The use case and real benefit are quite limited
then, aren't they?
Probably that's a philosophical question, but I'm curious what others think about it and
whether I am maybe missing a valuable use case.

> consecutive repairs of same range always finds 'out of sync' in sane cluster
> ----------------------------------------------------------------------------
>                 Key: CASSANDRA-12489
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Streaming and Messaging
>            Reporter: Benjamin Roth
>            Assignee: Benjamin Roth
>              Labels: lhf
>         Attachments: trace_3_10.1.log.gz, trace_3_10.2.log.gz, trace_3_10.3.log.gz, trace_3_10.4.log.gz,
trace_3_9.1.log.gz, trace_3_9.2.log.gz
> No matter how often or when I run the same subrange repair, it ALWAYS tells me that some
ranges are out of sync. Tested in 3.9 + 3.10 (git trunk of 2016-08-17). The cluster is sane:
all nodes are up and the cluster is not overloaded.
> I guess this is not a desired behaviour. I'd expect that a repair does what it says, and
that a consecutive repair shouldn't report "out of sync" any more if the cluster is sane.
> Especially for tables with MVs, this puts a lot of pressure on the cluster during repair,
as the same ranges are repaired over and over again.
> See traces of different runs attached.
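The behaviour described above can be illustrated with a toy model. This is only a sketch of one plausible mechanism, not Cassandra's actual repair code: it assumes that replica digests cover write timestamps, and that streamed data receives a new timestamp on arrival (as happens when MV repairs route mutations through the write path), so two replicas holding identical data can keep hashing differently after every repair. All names (`merkle_hash`, `repair`, the replica dicts) are hypothetical.

```python
import hashlib

def merkle_hash(partitions):
    # Stand-in for a Merkle tree leaf: hash over (key, value, write timestamp).
    h = hashlib.sha256()
    for key, (value, ts) in sorted(partitions.items()):
        h.update(f"{key}:{value}:{ts}".encode())
    return h.hexdigest()

def repair(source, target, now):
    # Toy repair: if the digests differ, "stream" the data across. Because the
    # data is re-applied with a NEW timestamp (as in the MV write path), the
    # replicas still end up with different digests afterwards.
    if merkle_hash(source) != merkle_hash(target):
        for key, (value, _old_ts) in source.items():
            target[key] = (value, now)  # new timestamp assigned on arrival
        return True   # repair reports "out of sync"
    return False      # replicas agree

replica_a = {"k1": ("v1", 100)}
replica_b = {"k1": ("v1", 90)}   # same value, older timestamp

first = repair(replica_a, replica_b, now=200)
second = repair(replica_a, replica_b, now=300)
# Both runs report "out of sync", even though the value itself never changes.
```

Under these assumptions, every consecutive repair of the same range re-streams the same data, matching the ever-repeating "out of sync" reports in the attached traces.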

This message was sent by Atlassian JIRA
