cassandra-commits mailing list archives

From "Benedict (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-6696) Drive replacement in JBOD can cause data to reappear.
Date Tue, 22 Apr 2014 17:48:20 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13977112#comment-13977112 ]

Benedict commented on CASSANDRA-6696:
-------------------------------------

bq. if a new node "steals" from a range that intersects disk X but not disk Y, you're going
to end up with more imbalance post-bootstrap than you had before.

Sure, it will steal some amount, but if the allocation of new vnodes ensures that any stealing
is distributed equally across the cluster, then while any single node addition will cause some
imbalance, the total imbalance of the cluster stays bounded over an arbitrary number of node
additions. You never get perfection, but you're never far from it either. The basic idea is
that while you cannot easily guarantee the size of any single vnode, you _can_ guarantee that
if you collect any N _adjacent_ vnodes together, their total owned range is within some
proportion of the ideal. As N grows, the proximity to perfect increases.
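
To make the adjacency point concrete, here is a minimal sketch (plain Python, not Cassandra code; it assumes tokens placed uniformly at random on a unit ring rather than any particular allocation algorithm) showing how the total ownership of a window of N adjacent vnodes approaches the ideal N/total fraction as N grows:

{code}
import random

# Illustration only: vnode tokens placed uniformly at random on a unit ring.
# Random placement is an assumption for this sketch, not the allocation
# scheme discussed above.
random.seed(42)
TOTAL_VNODES = 256 * 16   # hypothetical: 16 nodes with 256 vnodes each

tokens = sorted(random.random() for _ in range(TOTAL_VNODES))
# Ownership of vnode i = gap between its token and the next token (wrapping).
gaps = [(tokens[(i + 1) % TOTAL_VNODES] - tokens[i]) % 1.0
        for i in range(TOTAL_VNODES)]

for n in (1, 16, 64, 256):
    ideal = n / TOTAL_VNODES
    # Total ownership of every window of n adjacent vnodes.
    windows = [sum(gaps[(i + j) % TOTAL_VNODES] for j in range(n))
               for i in range(TOTAL_VNODES)]
    worst = max(abs(w - ideal) / ideal for w in windows)
    print("N=%4d: worst relative deviation from ideal = %.2f" % (n, worst))
{code}

With purely random placement the worst window only shrinks towards the ideal statistically, roughly like 1/sqrt(N); the allocation approach discussed here aims for an actual guaranteed bound on any N adjacent vnodes.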

bq. there is a finite cap on the amount of work needed to be performed per node addition

Sure, but that's a reasonably large cap - for all clusters with fewer than 256 nodes my statement
holds true.



> Drive replacement in JBOD can cause data to reappear. 
> ------------------------------------------------------
>
>                 Key: CASSANDRA-6696
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: sankalp kohli
>            Assignee: Marcus Eriksson
>             Fix For: 3.0
>
>
> In JBOD, when someone gets a bad drive, the bad drive is replaced with a new empty one and repair is run.
> This can cause deleted data to come back in some cases. The same is true for corrupt sstables, where we delete the corrupt sstable and run repair.
> Here is an example:
> Say we have 3 nodes A, B and C, with RF=3 and GC grace=10 days.
> row=sankalp col=sankalp was written 20 days ago and successfully went to all three nodes.
> Then a delete/tombstone was written successfully for the same row and column 15 days ago.
> Since this tombstone is older than gc grace, it was compacted away in nodes A and B together with the actual data. So there is no trace of this row/column in nodes A and B.
> Now in node C, say the original data is on drive1 and the tombstone is on drive2. Compaction has not yet reclaimed the data and tombstone.
> Drive2 becomes corrupt and is replaced with a new empty drive.
> Due to the replacement, the tombstone is now gone and row=sankalp col=sankalp has come back to life.
> Now, after replacing the drive, we run repair. This data will be propagated to all nodes.
> Note: This is still a problem even if we run repair every gc grace.
>  
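
For reference, the timeline arithmetic in the example above, as a minimal Python sketch (the dates are illustrative; the 20-day write, 15-day tombstone and 10-day gc grace figures come from the description):

{code}
from datetime import datetime, timedelta

# Timeline from the example: data written 20 days ago, tombstone 15 days ago,
# gc grace of 10 days. The "now" value is an arbitrary illustrative date.
now = datetime(2014, 4, 22)
data_write = now - timedelta(days=20)
tombstone_write = now - timedelta(days=15)
gc_grace = timedelta(days=10)

# A tombstone becomes purgeable by compaction once it is older than gc grace,
# which is why nodes A and B, where data and tombstone were compacted together,
# keep no trace of the row at all.
print("tombstone purgeable:", (now - tombstone_write) > gc_grace)   # True: 15 days > 10 days

# On node C the data (drive1) and the tombstone (drive2) were never compacted
# together. Losing drive2 removes only the tombstone, so the shadowed data
# becomes live again, and the subsequent repair streams it back to A and B.
drive2_lost = True
row_visible_on_C = drive2_lost   # tombstone gone, original data still on drive1
print("row resurrected on node C:", row_visible_on_C)
{code}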



--
This message was sent by Atlassian JIRA
(v6.2#6252)
