cassandra-user mailing list archives

From Nikolay Kovshov <nkovs...@yandex.ru>
Subject Re: OOM on heavy write load
Date Mon, 25 Apr 2011 12:21:03 GMT

I assume if I turn off swap it will just die earlier, no? What is the mechanism of dying?

From the link you provided:

# Row cache is too large, or is caching large rows
My rows_cached is 0, so that is not it.

# The memtable sizes are too large for the amount of heap allocated to the JVM
Is my memtable size too large? I have already made it smaller, to be sure it fits the "magical formula" - see the arithmetic below.
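
Spelling it out (the 3x factor and the 1 GB overhead are my reading of the tuning docs; the key cache estimate is my own rough guess):

    # Rough sketch of the 0.7 heap rule of thumb, as I understand it:
    # heap >= memtable_throughput_mb * 3 * hot CFs + 1 GB overhead + caches
    memtable_throughput_mb = 64   # my setting
    hot_column_families = 1       # the test writes to a single CF
    key_cache_mb = 100            # guess for ~1 million keys_cached

    expected_mb = memtable_throughput_mb * 3 * hot_column_families + 1024 + key_cache_mb
    print(expected_mb)  # ~1316 MB, which should fit comfortably in a 2 GB heap

So on paper 2 GB of heap should be enough.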

Trying to analyze the heap dumps gives me the following:

In one case the diagram shows 3 Memtables of about 64 MB each + 72 MB of "Thread" + 700 MB of "Unreachable objects"

Leak suspects:
7 instances of "org.apache.cassandra.db.Memtable", loaded by "sun.misc.Launcher$AppClassLoader @ 0x7f29f4992d68" occupy 456,292,912 (48.36%) bytes.
25,211 instances of "org.apache.cassandra.io.sstable.SSTableReader", loaded by "sun.misc.Launcher$AppClassLoader @ 0x7f29f4992d68" occupy 294,908,984 (31.26%) bytes.
72 instances of "java.lang.Thread", loaded by "<system class loader>" occupy 143,632,624 (15.22%) bytes.
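
Doing the division on those numbers (that the per-reader overhead is mostly bloom filters + index samples is my assumption):

    memtable_bytes = 456292912
    memtables = 7
    print(memtable_bytes / memtables / 2**20)  # ~62 MB each: the 64 MB threshold
                                               # holds, but 7 memtables are alive
                                               # at once instead of the ~3 budgeted

    reader_bytes = 294908984
    readers = 25211
    print(reader_bytes / readers)  # ~11.7 KB per SSTableReader (bloom filter +
                                   # index samples, I assume); with 25,211 live
                                   # SSTables it looks like compaction is far behind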


In other cases the memory analyzer hangs while trying to parse the 2 GB dump.

 

22.04.2011, 17:26, "Jonathan Ellis" <jbellis@gmail.com>:

>  (0) turn off swap
>  (1) http://www.datastax.com/docs/0.7/troubleshooting/index#nodes-are-dying-with-oom-errors
>
>  On Fri, Apr 22, 2011 at 8:00 AM, Nikolay Kovshov <nkovshov@yandex.ru> wrote:
>>   I am using Cassandra 0.7.0 with the following settings:
>>
>>   binary_memtable_throughput_in_mb: 64
>>   in_memory_compaction_limit_in_mb: 64
>>   keys_cached 1 million
>>   rows_cached 0
>>
>>   RAM for Cassandra 2 GB
>>
>>   I run very simple test
>>
>>   1 Node with 4 HDDs (1 HDD - commitlog and caches, 3 HDDs - data)
>>   1 KS => 1 CF => 1 Column
>>
>>   I insert data (random 64-byte key + 64-byte value) at the maximum possible
>>   speed, trying to hit disk i/o, calculate the speed and make sure Cassandra
>>   stays alive. It doesn't, unfortunately.
>>   After several hundred million inserts Cassandra always goes down with OOM.
>>   Getting it up again doesn't help - after inserting some new data it goes
>>   down again. By this time Cassandra has gone to swap and has a lot of pending
>>   tasks. I am not inserting anything now and the tasks sloooowly disappear,
>>   but it will take weeks to finish all of them.
>>
>>   compaction type: Minor
>>   column family: Standard1
>>   bytes compacted: 3661003227
>>   bytes total in progress: 4176296448
>>   pending tasks: 630
>>
>>   So, what am I (or Cassandra) doing wrong? I don't want Cassandra crashing
>>   beyond repair under heavy write load.
>  --
>  Jonathan Ellis
>  Project Chair, Apache Cassandra
>  co-founder of DataStax, the source for professional Cassandra support
>  http://www.datastax.com
