cassandra-commits mailing list archives

From "Yuki Morishita (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (CASSANDRA-6696) Partition sstables by token range
Date Mon, 04 Jan 2016 23:40:40 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15081526#comment-15081526 ]

Yuki Morishita edited comment on CASSANDRA-6696 at 1/4/16 11:40 PM:
--------------------------------------------------------------------

[~krummas] I still prefer just returning the 'keyspace name/table name' pair in {{RangeAwareSSTableWriter#getFilename}}
over adding a UUID to {{ProgressInfo}}. Even with an ID, {{nodetool netstats}} will still show
a constantly changing file name with inaccurate byte counts.
My suggested change is [here|https://github.com/krummas/cassandra/pull/2].
-{{SSTableMultiWriter#getFilename}} is also used in the debug log when flushing of SSTable(s) completes,
and because {{RangeAwareSSTableWriter}} can write SSTables when flushing, I think displaying
just the ks/table name there too is no more confusing than displaying only the last written file name.-
(edit: This looks like no problem here, my bad)
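
As a concrete sketch of that suggestion (hypothetical code, not the diff in the linked PR; the class name and the {{keyspace}} and {{table}} fields are assumptions for illustration):

{code:java}
// Hypothetical, self-contained illustration of returning a stable
// "keyspace/table" label instead of whichever sstable file is currently
// being written (which changes as the range-aware writer switches files
// per token range), so nodetool netstats shows one steady entry per stream.
public class RangeAwareWriterNameSketch
{
    private final String keyspace; // assumed field: target keyspace name
    private final String table;    // assumed field: target table name

    public RangeAwareWriterNameSketch(String keyspace, String table)
    {
        this.keyspace = keyspace;
        this.table = table;
    }

    // Stands in for RangeAwareSSTableWriter#getFilename in this sketch.
    public String getFilename()
    {
        return keyspace + "/" + table; // e.g. "ks1/standard1"
    }

    public static void main(String[] args)
    {
        System.out.println(new RangeAwareWriterNameSketch("ks1", "standard1").getFilename());
    }
}
{code}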


was (Author: yukim):
[~krummas] I still prefer just returning the 'keyspace name/table name' pair in {{RangeAwareSSTableWriter#getFilename}}
over adding a UUID to {{ProgressInfo}}. Even with an ID, {{nodetool netstats}} will still show
a constantly changing file name with inaccurate byte counts. {{SSTableMultiWriter#getFilename}} is
also used in the debug log when flushing of SSTable(s) completes, and because {{RangeAwareSSTableWriter}}
can write SSTables when flushing, I think displaying just the ks/table name there too is no more
confusing than displaying only the last written file name.

> Partition sstables by token range
> ---------------------------------
>
>                 Key: CASSANDRA-6696
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
>             Project: Cassandra
>          Issue Type: Improvement
>            Reporter: sankalp kohli
>            Assignee: Marcus Eriksson
>              Labels: compaction, correctness, dense-storage, jbod-aware-compaction, performance
>             Fix For: 3.2
>
>
> In JBOD, when someone gets a bad drive, the bad drive is replaced with a new empty one
> and repair is run.
> This can cause deleted data to come back in some cases. The same is true for corrupt
> sstables, where we delete the corrupt sstable and run repair.
> Here is an example:
> Say we have 3 nodes A, B and C with RF=3 and GC grace=10 days.
> row=sankalp col=sankalp was written 20 days back and successfully went to all three nodes.
> Then a delete/tombstone was written successfully for the same row and column 15 days back.
> Since this tombstone is older than gc grace, it got purged on nodes A and B when it was
> compacted together with the actual data. So there is no trace of this row/column on nodes
> A and B.
> Now on node C, say the original data is on drive1 and the tombstone is on drive2. Compaction
> has not yet reclaimed the data and tombstone.
> Drive2 becomes corrupt and is replaced with a new empty drive.
> Due to the replacement, the tombstone is now gone and row=sankalp col=sankalp has come
> back to life.
> Now, after replacing the drive, we run repair. This data will be propagated to all nodes.
> Note: This is still a problem even if we run repair every gc grace.
>
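
The purge reasoning in the example can be stated as a simple check (an illustrative sketch only, not Cassandra's actual purge code; the {{purgeable}} helper and its parameters are invented here, and real compaction additionally requires that the tombstone and the data it shadows are compacted together, as they are on nodes A and B above):

{code:java}
// Illustrative sketch: a tombstone becomes eligible for purge at compaction
// once its deletion time is older than (now - gc_grace_seconds).
public class GcGraceSketch
{
    static boolean purgeable(long deletionTimeSeconds, long gcGraceSeconds, long nowSeconds)
    {
        long gcBefore = nowSeconds - gcGraceSeconds;
        return deletionTimeSeconds < gcBefore;
    }

    public static void main(String[] args)
    {
        long day = 86_400L;
        long now = 20 * day;      // "today" in the example timeline
        long deletedAt = 5 * day; // tombstone written 15 days back
        long gcGrace = 10 * day;  // GC grace = 10 days
        System.out.println(purgeable(deletedAt, gcGrace, now)); // true -> purged on A and B
    }
}
{code}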



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
