cassandra-commits mailing list archives

From "Jim Witschey (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (CASSANDRA-9194) Delete-only workloads crash Cassandra
Date Tue, 21 Apr 2015 23:17:00 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-9194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14505980#comment-14505980 ]

Jim Witschey edited comment on CASSANDRA-9194 at 4/21/15 11:16 PM:
-------------------------------------------------------------------

[~benedict] I think I understand now; thanks for the explanation.

I've [changed the test|https://github.com/mambocab/cassandra-dtest/commit/25ee5b7050e96a85cd4e33eadc41a21cec7da393]
so that it checks {{MemtableOnHeapSize}} for versions >= 2.1 and {{MemtableDataSize}} for
2.0. As you indicated, it passes on 2.1.4 and trunk. It currently fails on 2.0.4. Does that
sound right to you?
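
For reference, here is a rough Java sketch of reading those two metrics over JMX (the dtest does this in Python; the ObjectNames below are my best guess at the 2.0 vs 2.1 layouts, so treat them as assumptions rather than gospel):

{code}
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class MemtableSizeProbe
{
    public static void main(String[] args) throws Exception
    {
        // Cassandra's default JMX endpoint; adjust host/port for the node under test.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        JMXConnector jmxc = JMXConnectorFactory.connect(url);
        try
        {
            MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
            boolean atLeast21 = true; // pick the metric based on the node's version

            Object size;
            if (atLeast21)
            {
                // 2.1+: per-table metrics include MemtableOnHeapSize (ObjectName assumed)
                ObjectName name = new ObjectName("org.apache.cassandra.metrics:type=ColumnFamily,keyspace=test,scope=test,name=MemtableOnHeapSize");
                size = mbs.getAttribute(name, "Value");
            }
            else
            {
                // 2.0: the ColumnFamilyStore MBean exposes MemtableDataSize (ObjectName assumed)
                ObjectName name = new ObjectName("org.apache.cassandra.db:type=ColumnFamilies,keyspace=test,columnfamily=test");
                size = mbs.getAttribute(name, "MemtableDataSize");
            }
            System.out.println("memtable size: " + size);
        }
        finally
        {
            jmxc.close();
        }
    }
}
{code}
On an affected 2.0 node, a delete-only workload shows the Memtable cell count climbing in cfstats while this data size stays at 0.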

I can't seem to apply your patch to 2.0; it looks like it was written for 2.1? But I'm +1
on the same logic applied to 2.0 if it passes the dtest.


was (Author: mambocab):
[~benedict] I think I understand now; thanks for the explanation.

I've [changed the test|https://github.com/mambocab/cassandra-dtest/commit/25ee5b7050e96a85cd4e33eadc41a21cec7da393]
so that it checks {{MemtableOnHeapSize}} for versions >= 2.1 and {{MemtableDataSize}} for
2.0. As you indicated, it passes on 2.1.4 and trunk. It currently fails on 2.0.4.

I can't seem to apply your patch to 2.0; it looks like it was written for 2.1? But I'm +1
on the same logic applied to 2.0 if it passes the dtest.

> Delete-only workloads crash Cassandra
> -------------------------------------
>
>                 Key: CASSANDRA-9194
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-9194
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: 2.0.14
>            Reporter: Robert Wille
>            Assignee: Benedict
>             Fix For: 2.0.15
>
>         Attachments: 9194.txt
>
>
> The size of a tombstone is not properly accounted for in the memtable. A memtable which has only tombstones will never get flushed. It will grow until the JVM runs out of memory. The following program easily demonstrates the problem.
> {code}
> // Standalone reproduction using the DataStax Java driver (2.x API).
> import com.datastax.driver.core.Cluster;
> import com.datastax.driver.core.PreparedStatement;
> import com.datastax.driver.core.Session;
>
> public class DeleteOnlyWorkload
> {
>     public static void main(String[] args)
>     {
>         Cluster c = Cluster.builder().addContactPoints("cas121.devf3.com").build();
>         Session s = c.connect();
>
>         s.execute("CREATE KEYSPACE IF NOT EXISTS test WITH replication = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 }");
>         s.execute("CREATE TABLE IF NOT EXISTS test.test(id INT PRIMARY KEY)");
>         PreparedStatement stmt = s.prepare("DELETE FROM test.test WHERE id = :id");
>
>         // Issue nothing but deletes; each statement writes only a tombstone.
>         int id = 0;
>         while (true)
>         {
>             s.execute(stmt.bind(id));
>             id++;
>         }
>     }
> }
> {code}
> This program should run forever, but eventually Cassandra runs out of heap and craps out. You needn't wait for Cassandra to crash: if you run "nodetool cfstats test.test" while it is running, you'll see the Memtable cell count grow, but the Memtable data size will remain 0.
> This issue was fixed once before. I received a patch for version 2.0.5 (I believe) which contained the fix, but the fix has apparently been lost: the accounting is clearly broken again, and I don't see the fix in the change logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
