cassandra-commits mailing list archives

From "sankalp kohli (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-6696) Drive replacement in JBOD can cause data to reappear.
Date Wed, 12 Feb 2014 17:20:24 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13899297#comment-13899297 ]

sankalp kohli commented on CASSANDRA-6696:
------------------------------------------

With this, the whole disk_failure_policy mechanism is broken: if you blacklist a drive, you can potentially bring deleted data back to life.

One fix for this is one of my JIRAs from a while back: CASSANDRA-4784.
If we divide the drives by token range, then we are sure that the data and its tombstone get blacklisted together.
Example: say a node is handling ranges 1-10 and 11-20. We can have drive A handle 1-10 and drive B handle 11-20.
Though this might have problems with load balancing.
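
To make the idea concrete, here is a minimal sketch of range-based drive placement. This is not the CASSANDRA-4784 patch itself; the class, directory paths, and method names are made up for illustration. The point is that data and tombstones for the same partition map to the same token and therefore the same drive, so blacklisting that drive removes both together.

    import java.util.List;

    // Hypothetical illustration of range-aware drive placement (not Cassandra code).
    public final class RangeAwareDrivePlacement {

        /** One data directory and the inclusive token range it owns. */
        record DriveRange(String dataDirectory, long minToken, long maxToken) {
            boolean owns(long token) {
                return token >= minToken && token <= maxToken;
            }
        }

        private final List<DriveRange> drives;

        public RangeAwareDrivePlacement(List<DriveRange> drives) {
            this.drives = drives;
        }

        /** Pick the drive whose range contains the partition's token. */
        public String directoryFor(long partitionToken) {
            for (DriveRange drive : drives)
                if (drive.owns(partitionToken))
                    return drive.dataDirectory;
            throw new IllegalStateException("No drive owns token " + partitionToken);
        }

        public static void main(String[] args) {
            // Node owns tokens 1-20, split across two drives as in the example above.
            RangeAwareDrivePlacement placement = new RangeAwareDrivePlacement(List.of(
                    new DriveRange("/mnt/driveA", 1, 10),
                    new DriveRange("/mnt/driveB", 11, 20)));

            // Data and a later tombstone for the same partition (token 7) both
            // resolve to /mnt/driveA, so losing driveB cannot resurrect that row.
            System.out.println(placement.directoryFor(7));   // /mnt/driveA
            System.out.println(placement.directoryFor(15));  // /mnt/driveB
        }
    }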

> Drive replacement in JBOD can cause data to reappear. 
> ------------------------------------------------------
>
>                 Key: CASSANDRA-6696
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: sankalp kohli
>            Priority: Minor
>
> In JBOD, when someone gets a bad drive, the bad drive is replaced with a new empty one and repair is run.
> This can cause deleted data to come back in some cases. The same is true for corrupt sstables, where we delete the corrupt sstable and run repair.
> Here is an example:
> Say we have 3 nodes A, B and C, with RF=3 and GC grace=10 days.
> row=sankalp col=sankalp was written 20 days ago and successfully reached all three nodes.
> Then a delete/tombstone was written successfully for the same row and column 15 days ago.
> Since this tombstone is older than GC grace, it got compacted away on nodes A and B, because it was compacted together with the actual data. So there is no trace of this row/column on nodes A and B.
> Now on node C, say the original data is on drive1 and the tombstone is on drive2. Compaction has not yet reclaimed the data and tombstone.
> Drive2 becomes corrupt and is replaced with a new empty drive.
> Due to the replacement, the tombstone is now gone and row=sankalp col=sankalp has come back to life.
> Now, after replacing the drive, we run repair. This data will be propagated to all nodes.
> Note: This is still a problem even if we run repair every GC grace period.
>
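
For reference, the reason the tombstone could be purged on nodes A and B but not on node C comes down to the GC-grace check applied at compaction time. The following is a simplified sketch of that check, not the actual Cassandra code path; the method and parameter names are made up for illustration.

    import java.util.concurrent.TimeUnit;

    // Simplified illustration of when a tombstone becomes droppable during
    // compaction (not the real Cassandra implementation).
    public final class TombstonePurgeCheck {

        /**
         * A tombstone may be dropped once gcGraceSeconds have elapsed since it
         * was written AND the compaction also covers the data it shadows.
         * Nodes A and B compacted data and tombstone together, so both were
         * removed; node C still held them separately, so both were kept.
         */
        static boolean purgeable(long tombstoneWrittenAtMillis,
                                 int gcGraceSeconds,
                                 long nowMillis,
                                 boolean shadowedDataInSameCompaction) {
            long ageSeconds = TimeUnit.MILLISECONDS.toSeconds(nowMillis - tombstoneWrittenAtMillis);
            return ageSeconds > gcGraceSeconds && shadowedDataInSameCompaction;
        }

        public static void main(String[] args) {
            long now = System.currentTimeMillis();
            long fifteenDaysAgo = now - TimeUnit.DAYS.toMillis(15);
            int gcGrace = (int) TimeUnit.DAYS.toSeconds(10);

            // Nodes A and B: tombstone older than gc_grace, compacted with the data -> purged.
            System.out.println(purgeable(fifteenDaysAgo, gcGrace, now, true));   // true
            // Node C: data and tombstone not yet compacted together -> tombstone kept on drive2.
            System.out.println(purgeable(fifteenDaysAgo, gcGrace, now, false));  // false
        }
    }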



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
