incubator-cassandra-user mailing list archives

From Yang <teddyyyy...@gmail.com>
Subject Re: Re: Repair question - why is so much data transferred?
Date Thu, 21 Jul 2011 19:31:29 GMT
I have been thinking about the problem of repair for a while.

If we set aside the need for partition tolerance, the eventual-consistency
approach is probably the ultimate reason repairs are needed. Compared to
ZooKeeper, Spinnaker (a recent VLDB paper), Chubby, or HBase, those systems
only need to bring a node up to date at the *end* of its write history,
because every node's write history forms a prefix of the real history.
Dynamo-style systems, by contrast, create many "holes" in the history,
because any individual write can be missed; as a result you have to do an
expensive scan to repair. In other words, by design, those other systems
can find discrepancies at essentially zero cost, while Dynamo-style systems
need to regenerate an expensive Merkle tree.
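To illustrate the Merkle-tree comparison being described, here is a toy
sketch in Python (not Cassandra's actual implementation, which hashes key
ranges and walks mismatching subtrees top-down): each replica hashes its
data into a tree, and only when the roots differ do we descend to find the
out-of-sync ranges. The cost the message refers to is building the leaf
hashes at all, which requires scanning every row.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a Merkle tree bottom-up over a power-of-two list of rows.
    Returns the levels, leaf hashes first, root last."""
    level = [h(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def diff_ranges(tree_a, tree_b):
    """If the roots match, the replicas are in sync and nothing is
    transferred; otherwise return the indices of mismatched leaves.
    (Real implementations descend subtree-by-subtree instead of
    comparing every leaf.)"""
    if tree_a[-1] == tree_b[-1]:
        return []
    return [i for i, (a, b) in enumerate(zip(tree_a[0], tree_b[0]))
            if a != b]
```

For example, two replicas differing only in one row would yield a single
mismatched range, yet both sides still had to hash every row to discover
that.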


I've been thinking about implementing the ZooKeeper protocol for some
optional CFs that want to use HBase-style replication (a single write
point/master within each replica set, with the master being leader-elected).
This would be similar to Spinnaker, except that we would not actually use
ZK: relying on an external disconnection notification leaves some rare
chance of master conflict, plus it adds an extra component dependency.
Given the sending/acking traffic patterns already in Cassandra, it's
actually easier to add the ZAB protocol directly. This way no repair would
be needed for such CFs.
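The key property being claimed, that a single ordered write point makes
recovery cheap, can be sketched as follows. This is a deliberately
simplified model (no leader election, no quorum acking, no durability),
just to show why catch-up needs only a log suffix rather than a full scan:

```python
class Replica:
    """Toy single-writer replicated log: because all writes go through one
    master in order, every replica's log is a prefix of the master's."""

    def __init__(self):
        self.log = []

    def append(self, entry):
        # In a real protocol (ZAB/Spinnaker) the leader would replicate
        # the entry and wait for a quorum of acks before committing.
        self.log.append(entry)

    def catch_up(self, leader):
        """Recovery only needs the suffix past our last index --
        no Merkle-tree scan over the whole data set."""
        missing = leader.log[len(self.log):]
        self.log.extend(missing)
        return len(missing)
```

Because a lagging replica knows exactly where its history ends, finding
the discrepancy is O(1); in a Dynamo-style system there is no such single
cut point, hence the full scan.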


yang

On Thu, Jul 21, 2011 at 8:43 AM,  <jonathan.colby@gmail.com> wrote:
> from ticket 2818:
> "One (reasonably simple) proposition to fix this would be to have repair
> schedule validation compactions across nodes one by one (i.e, one CF/range
> at a time), waiting for all nodes to return their tree before submitting the
> next request. Then on each node, we should make sure that the node will
> start the validation compaction as soon as requested. For that, we probably
> want to have a specific executor for validation compaction"
>
> This was the way I thought repair already worked.
>
> Anyway, in our case, we only have one CF, so I'm not sure if both issues
> apply to my situation.
>
> Thanks. Looking forward to the release where these 2 things are fixed.
>
> On , Jonathan Ellis <jbellis@gmail.com> wrote:
>> On Thu, Jul 21, 2011 at 9:14 AM, Jonathan Colby
>>
>> <jonathan.colby@gmail.com> wrote:
>>
>> > I regularly run repair on my Cassandra cluster. However, I often see
>> > that during the repair operation very large amounts of data are transferred
>> > to other nodes.
>>
>>
>>
>> https://issues.apache.org/jira/browse/CASSANDRA-2280
>>
>> https://issues.apache.org/jira/browse/CASSANDRA-2816
>>
>>
>>
>> > My question is, if only some data is out of sync, why are entire data
>> > files being transferred?
>>
>>
>>
>> Repair streams ranges of files as a unit (which becomes a new file on
>>
>> the target node) rather than using the normal write path.
>>
>>
>>
>> --
>>
>> Jonathan Ellis
>>
>> Project Chair, Apache Cassandra
>>
>> co-founder of DataStax, the source for professional Cassandra support
>>
>> http://www.datastax.com
>>
