incubator-cassandra-user mailing list archives

From Jonathan Ellis <jbel...@gmail.com>
Subject Re: OOM on heavy write load
Date Fri, 22 Apr 2011 13:26:59 GMT
(0) turn off swap
(1) http://www.datastax.com/docs/0.7/troubleshooting/index#nodes-are-dying-with-oom-errors
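
For concreteness, "turn off swap" on a typical Linux box means the following (standard
commands, not from the original mail):

    sudo swapoff -a    # disable swap immediately
    # then comment out the swap line(s) in /etc/fstab so it stays off after reboot

Cassandra 0.7 can also lock the JVM heap in RAM via JNA's mlockall when the JNA jar is on
the classpath, which keeps the process out of swap even if swap is left enabled.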

On Fri, Apr 22, 2011 at 8:00 AM, Nikolay Kovshov <nkovshov@yandex.ru> wrote:
> I am using Cassandra 0.7.0 with the following settings:
>
> binary_memtable_throughput_in_mb: 64
> in_memory_compaction_limit_in_mb: 64
> keys_cached 1 million
> rows_cached 0
>
> RAM for Cassandra 2 GB
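
A fixed 2 GB heap in 0.7 is normally pinned in conf/cassandra-env.sh; the HEAP_NEWSIZE
value below is illustrative, not from the original mail:

    MAX_HEAP_SIZE="2G"
    HEAP_NEWSIZE="200M"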
>
> I run a very simple test:
>
> 1 Node with 4 HDDs (1 HDD - commitlog and caches, 3 HDDs - data)
> 1 KS => 1 CF => 1 Column
>
> I insert data (random 64-byte key + 64-byte value) at the maximum possible speed, trying
> to saturate disk I/O, measuring the rate and checking that Cassandra stays alive. It
> doesn't, unfortunately.
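
A minimal sketch of such an insert loop, written here with the Python client pycassa; the
client and the keyspace name "Keyspace1" are assumptions, since the mail doesn't say
which driver was used:

    import os
    import pycassa

    # pycassa 1.x against Cassandra 0.7; keyspace name is assumed
    pool = pycassa.ConnectionPool('Keyspace1', ['localhost:9160'])
    cf = pycassa.ColumnFamily(pool, 'Standard1')

    while True:
        key = os.urandom(32).encode('hex')     # random 64-byte (hex) key
        cf.insert(key, {'v': os.urandom(64)})  # one 64-byte column value; name 'v' is illustrative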
> After several hundred million inserts Cassandra always goes down with an OOM. Getting
> it up again doesn't help - after inserting some new data it goes down again. By this
> time Cassandra has gone into swap and has a lot of pending tasks. I am not inserting
> anything now and the tasks sloooowly disappear, but it will take weeks to get through
> all of them.
>
> compaction type: Minor
> column family: Standard1
> bytes compacted: 3661003227
> bytes total in progress: 4176296448
> pending tasks: 630
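
That output has the shape of nodetool compactionstats, which is the easiest way to watch
the backlog drain:

    watch -n 5 nodetool -h localhost compactionstats   # re-run every 5 seconds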
>
> So, what am I (or Cassandra) doing wrong? I don't want Cassandra crashing beyond any
> means of repair under heavy write load.
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
http://www.datastax.com
