incubator-cassandra-user mailing list archives

From Max <cassan...@ajowa.de>
Subject Re: Re: Cassandra 0.7 beta 3 outOfMemory (OOM)
Date Fri, 03 Dec 2010 16:23:52 GMT
Hi,

we increased the heap to 3 GB (JRockit VM on 32-bit Windows with
4 GB RAM),
but under "heavy" inserts Cassandra still crashes with an OutOfMemory
error after a GC storm.
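
For reference, roughly how we raised the heap; the variable name is from
the stock bin/cassandra.bat as I remember it, so treat this as a sketch
rather than our exact script:

    rem in bin/cassandra.bat - pin min and max heap so the JVM never resizes
    set JAVA_OPTS=%JAVA_OPTS% -Xms3G -Xmx3G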

It sounds very similar to https://issues.apache.org/jira/browse/CASSANDRA-1177

In our insert tests the average heap usage slowly grows up to the
3 GB limit (jconsole monitored over 50 min:
http://oi51.tinypic.com/k12gzd.jpg), and the CompactionManager queue
also grows steadily, up to about 50 pending jobs.
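
We read that queue length off the CompactionManager MBean in jconsole;
the attribute names below are from memory, so take them as an
assumption:

    org.apache.cassandra.db:type=CompactionManager
        PendingTasks      <- the queue that keeps growing
        CompletedTasks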

We also tried decreasing the per-CF memtable thresholds, but after
about half a million inserts the node still goes down.
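
Roughly what we ran in cassandra-cli; the keyspace and CF names are
placeholders and the attribute names are from memory of the 0.7 cli
help, so this is a sketch, not the exact commands:

    [default@MyKeyspace] update column family MyIndexCF
        with memtable_throughput=32 and memtable_operations=0.1;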

- Cassandra 0.7.0 beta 3
- single node
- about 200 inserts/s, ~500 bytes - 1 KB each
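
(For scale, assuming the worst case of 1 KB per insert and ignoring
per-column overhead: 200 inserts/s x 1 KB = ~0.2 MB/s, so a 128 MB
memtable threshold is reached in roughly 10 minutes, i.e. a new flush
and compaction candidate every 10 minutes.)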


Is there any option other than slowing down the insert rate?

What could we monitor to tell whether a node runs stably under this
insert rate?

Thank you for your answer,
Max


Aaron Morton <aaron@thelastpickle.com>:

> Sounds like you need to increase the Heap size and/or reduce the   
> memtable_throughput_in_mb and/or turn off the internal caches.   
> Normally the binary memtable thresholds only apply to bulk load   
> operations and it's the per CF memtable_* settings you want to   
> change. I'm not familiar with Lucandra, though.
>
> See the section on JVM Heap Size here 
> http://wiki.apache.org/cassandra/MemtableThresholds
>
> Bottom line is you will need more JVM heap memory.
>
> Hope that helps.
> Aaron
>
> On 29 Nov 2010, at 10:28 PM, cassandra@ajowa.de wrote:
>
> Hi community,
>
> during my tests I had several OOM crashes.
> Any hints for tracking down the problem would be appreciated.
>
> The first crash came after about 45 minutes of running the insert test script.
> In the following tests the time to OOM got shorter and shorter, until the
> node started to crash even in "idle" mode.
>
> Here the facts:
> - Cassandra 0.7 beta 3
> - using Lucandra to index about 3 million files, ~1 KB of data each
> - inserting from one client into one Cassandra node at about 200 files/s
> - the Cassandra data files for this keyspace grow to about 20 GB
> - the keyspace contains only the two Lucandra-specific CFs
>
> Cluster:
> - single Cassandra node on Windows 32-bit, Xeon 2.5 GHz, 4 GB RAM
> - Java JRE 1.6.0_22
> - heap space initially 1 GB, later increased to 1.3 GB
>
> Cassandra.yaml:
> default + reduced "binary_memtable_throughput_in_mb" to 128
>
> CFs:
> default + reduced
> min_compaction_threshold: 4
> max_compaction_threshold: 8
>
>
> I think the problem always appears during compaction,
> and perhaps it is a result of large rows (some around 170 MB).
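>
> If I understand the 0.7 behaviour right (an assumption on my side),
> rows larger than the cassandra.yaml setting below are compacted with
> a slower two-pass on-disk path instead of entirely in memory, which
> should bound how much heap a 170 MB row needs during compaction:
>
>     # default value as I remember it; rows above this size are
>     # compacted on disk rather than entirely in memory
>     in_memory_compaction_limit_in_mb: 64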
>
> Are there other options we could use to get by with less memory?
>
> Is it a compaction problem, and if so, how can we avoid it?
> Slower inserts? More memory?
> An even lower memtable_throughput or in_memory_compaction_limit?
> Continuous manual major compactions?
>
> I've read
> http://www.riptano.com/docs/0.6/troubleshooting/index#nodes-are-dying-with-oom-errors
> - the row size issue should be fixed since 0.7, and 200 MB is still far from 2 GB
> - only the key cache is in use, and only slightly (3600/20000)
> - after a lot of writes Cassandra crashes even in idle mode
> - the memtable size was reduced, and there are only 2 CFs
>
> Several heap dumps analyzed in MAT show the compaction thread retaining
> 60-99% of the heap.
