Issue created at https://issues.apache.org/jira/browse/CASSANDRA-6673
I'm seeing the same with cassandra-2.0.4 during compaction, after a lot of sstable files are streamed following bootstrap/repair. The strange thing is that the 'Last written key >= current key' exception during compaction of L0/L1 sstables goes away after restarting Cassandra. But then I see those warnings about overlapping sstables.
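For context on that exception: during compaction, keys must be written to the new sstable in sorted order, and the writer rejects any partition key that is not strictly greater than the last one written. The following is a minimal illustrative sketch of that invariant (hypothetical names, not Cassandra's actual Java code):

```python
# Sketch of the sorted-write invariant (illustrative only, not
# Cassandra's real implementation): a compaction writer requires
# partition keys in strictly increasing order; a violation produces
# the "Last written key >= current key" style of error.
def write_sorted(keys):
    written = []
    last = None
    for key in keys:
        if last is not None and last >= key:
            # This is the condition behind the exception quoted above.
            raise RuntimeError(
                "Last written key %r >= current key %r" % (last, key))
        written.append(key)
        last = key
    return written
```

Rows out of order within an sstable (which `nodetool scrub` can detect and fix) would trip exactly this kind of check.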
I think the change in https://issues.apache.org/jira/browse/CASSANDRA-5921 is causing overlapping sstables in L1. I didn't see this with cassandra-1.2.9, which had https://issues.apache.org/jira/browse/CASSANDRA-5907 fixed. Can you open a jira reporting this issue?
On Thursday, February 6, 2014 4:31 AM, "Desimpel, Ignace" <Ignace.Desimpel@nuance.com> wrote:
Also, these nodes and data were created entirely by 2.0.4 code, so this should not really be a 1.1.x-related bug.
Also, I restarted the whole test with a completely new database, and I get similar problems.
The join with auto bootstrap itself had finished, so I restarted the added node. During the restart I saw a message indicating that something is wrong with this row and sstable.
Of course, in my case I did not drop an sstable from another node. But I did decommission and re-add the node, so that is still a kind of 'data-from-another-node'.
At level 2, SSTableReader(path='../../../../data/cdi.cassandra.cdi/dbdatafile/Ks100K/ForwardStringFunction/Ks100K-ForwardStringFunction-jb-67-Data.db') [DecoratedKey(065864ce01024e4e505300, 065864ce01024e4e505300), DecoratedKey(14c9d35e0102646973706f736974696f6e7300, 14c9d35e0102646973706f736974696f6e7300)] overlaps SSTableReader(path='../../../../data/cdi.cassandra.cdi/dbdatafile/Ks100K/ForwardStringFunction/Ks100K-ForwardStringFunction-jb-64-Data.db') [DecoratedKey(068c2e4101024d6f64616c207665726200, 068c2e4101024d6f64616c207665726200), DecoratedKey(06c566b4010244657465726d696e657200, 06c566b4010244657465726d696e657200)]. This could be caused by a bug in Cassandra 1.1.0 .. 1.1.3 or due to the fact that you have dropped sstables from another node into the data directory. Sending back to L0. If you didn't drop in sstables, and have not yet run scrub, you should do so since you may also have rows out-of-order within an sstable
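The warning above means that in a level above L0, where key ranges must be disjoint, two sstables cover intersecting [first key, last key] ranges. With a byte-ordered partitioner those decorated keys compare lexicographically, so the overlap can be checked directly. A minimal sketch of such an interval check, using the key bounds from the log message (illustrative only, not Cassandra's actual LeveledManifest code):

```python
# Two inclusive key ranges [a_first, a_last] and [b_first, b_last]
# overlap iff each range starts at or before the other range ends.
def ranges_overlap(a_first, a_last, b_first, b_last):
    return a_first <= b_last and b_first <= a_last

# Key bounds taken from the warning above (hex-encoded decorated keys,
# which sort lexicographically under a byte-ordered partitioner):
jb_67 = ("065864ce01024e4e505300",
         "14c9d35e0102646973706f736974696f6e7300")
jb_64 = ("068c2e4101024d6f64616c207665726200",
         "06c566b4010244657465726d696e657200")

# jb-64's range falls entirely inside jb-67's range, so the two
# sstables overlap and one of them is sent back to L0.
print(ranges_overlap(jb_67[0], jb_67[1], jb_64[0], jb_64[1]))
```

Here jb-64 is in fact fully contained within jb-67's range, which is why leveled compaction demotes one of them back to L0 rather than leaving both at the higher level.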
4 nodes, byte-ordered partitioner, LCS, 3 compaction executors, replication factor 1.
Code is the 2.0.4 version but with the patch for CASSANDRA-6638. However, no cleanup is run, so that patch should not play a role.
The 4-node cluster is started and inserts/queries are done up to only about 10 GB of data on each node.
Then decommission one node, and delete local files.
Then add node again.
Exception: see below.