cassandra-commits mailing list archives

From "Wei Deng (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (CASSANDRA-6696) Partition sstables by token range
Date Thu, 21 Jul 2016 01:20:21 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei Deng updated CASSANDRA-6696:
--------------------------------
    Labels: compaction correctness dense-storage doc-impacting jbod-aware-compaction lcs performance
 (was: compaction correctness dense-storage doc-impacting jbod-aware-compaction performance)

> Partition sstables by token range
> ---------------------------------
>
>                 Key: CASSANDRA-6696
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: sankalp kohli
>            Assignee: Marcus Eriksson
>              Labels: compaction, correctness, dense-storage, doc-impacting, jbod-aware-compaction, lcs, performance
>             Fix For: 3.2, 3.3
>
>
> In a JBOD setup, when a drive goes bad it is replaced with a new, empty one and repair is run.
> This can cause deleted data to come back in some cases. The same is true for corrupt
> sstables, where we delete the corrupt sstable and run repair.
> Here is an example:
> Say we have 3 nodes A, B, and C, with RF=3 and gc grace = 10 days.
> row=sankalp col=sankalp was written 20 days ago and successfully reached all three nodes.
> Then a delete/tombstone was successfully written for the same row and column 15 days ago.
> Since this tombstone is older than gc grace, it was compacted away on nodes A and B together
> with the actual data, so there is no trace of this row/column on nodes A and B.
> Now on node C, say the original data is on drive1 and the tombstone is on drive2, and compaction
> has not yet reclaimed either of them.
> Drive2 becomes corrupt and is replaced with a new, empty drive.
> Due to the replacement, the tombstone is now gone and row=sankalp col=sankalp has come back to life.
> Now, after replacing the drive, we run repair, and this resurrected data is propagated to all nodes.
> Note: This is still a problem even if we run repair within every gc grace period (a toy sketch of this timeline follows below).
>  
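The timeline above can be sketched as a small toy model. This is plain Python, not Cassandra code: the node/drive sets and the compact() helper are hypothetical names used only to illustrate the sequence of events under the assumptions in the description (RF=3, gc grace = 10 days, data written 20 days ago, tombstone 15 days ago).

# Toy model of the resurrection timeline above -- NOT Cassandra code.
GC_GRACE_DAYS = 10

# Each replica's view of row=sankalp col=sankalp; node C spreads it across two JBOD drives.
node_a = {"cell", "tombstone"}          # data ~20 days old, tombstone ~15 days old
node_b = {"cell", "tombstone"}
node_c_drive1 = {"cell"}                # original data
node_c_drive2 = {"tombstone"}           # the delete

def compact(records, tombstone_age_days):
    # Once a tombstone older than gc grace is compacted together with the data it shadows,
    # both can be dropped -- this is what happened on nodes A and B.
    if "tombstone" in records and tombstone_age_days > GC_GRACE_DAYS:
        return records - {"cell", "tombstone"}
    return records

node_a = compact(node_a, tombstone_age_days=15)   # -> empty set
node_b = compact(node_b, tombstone_age_days=15)   # -> empty set

# On node C the data and the tombstone sit on different drives and have not been compacted
# together yet. Drive2 (holding only the tombstone) goes bad and is replaced with an empty drive.
node_c_drive2 = set()
node_c = node_c_drive1 | node_c_drive2            # {"cell"} -- the delete is lost

# Repair streams whatever any replica still has to the others, so the deleted cell comes back.
after_repair = node_a | node_b | node_c
print(after_repair)                               # {'cell'}: row=sankalp col=sankalp is alive again

Because nodes A and B have already purged both the cell and the tombstone, nothing in the cluster contradicts node C's surviving cell, and repair simply streams it back to every replica.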



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
