cassandra-commits mailing list archives

From "Jim Witschey (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (CASSANDRA-9194) Delete-only workloads crash Cassandra
Date Mon, 20 Apr 2015 16:08:02 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14503090#comment-14503090 ]

Jim Witschey edited comment on CASSANDRA-9194 at 4/20/15 4:07 PM:
------------------------------------------------------------------

I'm afraid I don't follow.

bq. This is already behaving properly in 2.1

[The results of the dtest|https://github.com/riptano/cassandra-dtest/blob/master/deletion_test.py#L44]
show that the tracked memtable size is 0 in 2.1 after performing 100 deletions -- {{MemtableLiveDataSize}}
is reported as 0 over JMX even when {{MemtableColumnsCount}} is 100. Is that behavior correct?
I may not have been clear, but that test fails on all released 2.0 and 2.1 versions.

Also, I don't understand why the amount of memory to track for tombstones is arbitrary in
2.0.
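
To make the failure mode concrete, here is a toy model of the accounting bug (plain Python, not Cassandra code; the tombstone size and flush threshold are made-up illustrative numbers). If the flush decision keys off the tracked live data size, and tombstones contribute nothing to that size, the threshold is never crossed even though real heap usage grows with every delete:

```python
TOMBSTONE_REAL_SIZE = 32      # bytes actually held on heap (illustrative)
FLUSH_THRESHOLD = 1024        # flush when the *tracked* size exceeds this

class ToyMemtable:
    def __init__(self, count_tombstone_size):
        self.count_tombstone_size = count_tombstone_size
        self.tracked_size = 0     # analogue of MemtableLiveDataSize
        self.real_size = 0        # what the JVM heap actually pays
        self.cell_count = 0       # analogue of MemtableColumnsCount
        self.flushes = 0

    def delete(self):
        self.cell_count += 1
        self.real_size += TOMBSTONE_REAL_SIZE
        if self.count_tombstone_size:
            self.tracked_size += TOMBSTONE_REAL_SIZE
        # Flush is triggered only by the tracked size.
        if self.tracked_size > FLUSH_THRESHOLD:
            self.flushes += 1
            self.tracked_size = 0
            self.real_size = 0
            self.cell_count = 0

buggy = ToyMemtable(count_tombstone_size=False)
fixed = ToyMemtable(count_tombstone_size=True)
for _ in range(10_000):
    buggy.delete()
    fixed.delete()

# buggy never flushes and its real heap footprint grows without bound;
# fixed flushes regularly and its footprint stays bounded.
print(buggy.flushes, buggy.real_size)
print(fixed.flushes, fixed.real_size)
```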



> Delete-only workloads crash Cassandra
> -------------------------------------
>
>                 Key: CASSANDRA-9194
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-9194
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: 2.0.14
>            Reporter: Robert Wille
>            Assignee: Benedict
>             Fix For: 2.0.15
>
>         Attachments: 9194.txt
>
>
> The size of a tombstone is not properly accounted for in the memtable. A memtable which
> has only tombstones will never get flushed. It will grow until the JVM runs out of memory.
> The following program easily demonstrates the problem.
> {code}
> Cluster.Builder builder = Cluster.builder();
> Cluster c = builder.addContactPoints("cas121.devf3.com").build();
> Session s = c.connect();
>
> s.execute("CREATE KEYSPACE IF NOT EXISTS test WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 }");
> s.execute("CREATE TABLE IF NOT EXISTS test.test(id INT PRIMARY KEY)");
> PreparedStatement stmt = s.prepare("DELETE FROM test.test WHERE id = :id");
>
> int id = 0;
> while (true)
> {
>     s.execute(stmt.bind(id));
>     id++;
> }
> {code}
> This program should run forever, but eventually Cassandra runs out of heap and crashes.
> You needn't wait for Cassandra to crash: if you run "nodetool cfstats test.test" while it
> is running, you'll see Memtable cell count grow while Memtable data size remains 0.
> This issue was fixed once before. I received a patch (for version 2.0.5, I believe) which
> contained the fix, but that fix has apparently been lost: the behavior is clearly broken
> again, and I don't see the fix in the change logs.
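
The nodetool check described in the report can be scripted. A minimal sketch: the label strings below follow the output quoted in this report, but exact cfstats formatting varies between Cassandra versions, so treat the regexes as assumptions:

```python
import re

def memtable_symptom(cfstats_text):
    """Return True if cfstats output shows the symptom described above:
    deletes are landing (cell count > 0) but none of their memory is
    being tracked (data size == 0)."""
    cells = data = None
    for line in cfstats_text.splitlines():
        m = re.match(r"\s*Memtable cell count:\s*(\d+)", line)
        if m:
            cells = int(m.group(1))
        m = re.match(r"\s*Memtable data size:\s*(\d+)", line)
        if m:
            data = int(m.group(1))
    return cells is not None and data is not None and cells > 0 and data == 0

# Illustrative excerpt of `nodetool cfstats test.test` output while the
# delete-only loop is running (not captured from a real node):
sample = """\
Table: test
Memtable cell count: 100000
Memtable data size: 0
"""
print(memtable_symptom(sample))  # True
```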



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
