cassandra-user mailing list archives

From Adria Arcarons <>
Subject OldGen saturation
Date Tue, 28 Oct 2014 16:02:29 GMT

I work for a company that gathers time series data from different sensors. I've been trying
to set up C* in a single-node test environment to get an idea of the performance Cassandra
will give in our use case. To do so, I have implemented a test that simulates our real
insertion pattern.

We have about 50,000 CFs of varying size, grouping sensors that are in the same physical location.
Our partition key is made up of the id of the sensor and the type of value being measured,
so there is a single (wide) row for each (sensorId, parameterId) combination. The primary key
consists of the partition key plus the timestamp, and each row stores the measured value.
The timestamp is the clustering column, so slice reads are fast.
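For concreteness, the schema described above might look roughly like the following CQL. The table and column names are hypothetical; only the key structure (composite partition key, timestamp clustering column) is taken from the description:

```cql
-- Hypothetical sketch of the schema described above.
-- Partition key: (sensor_id, parameter_id); clustering column: ts.
CREATE TABLE readings (
    sensor_id    int,
    parameter_id int,
    ts           timestamp,
    value        double,
    PRIMARY KEY ((sensor_id, parameter_id), ts)
) WITH CLUSTERING ORDER BY (ts ASC);
```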

The write test consists of a continuous flow of inserts. The inserts are done inside BATCH
statements in groups of 1,000, to a single CF at a time, to make them faster. The client runs
on a separate machine.
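A batch of this shape might look like the following CQL sketch (hypothetical keyspace, table, and column names; only the "1,000 inserts per batch, one CF at a time" pattern is from the description above):

```cql
-- Hypothetical sketch of one insert batch (1,000 statements per batch).
BEGIN BATCH
  INSERT INTO location_0001.readings (sensor_id, parameter_id, ts, value)
    VALUES (42, 7, '2014-10-28 15:00:00', 21.3);
  INSERT INTO location_0001.readings (sensor_id, parameter_id, ts, value)
    VALUES (42, 7, '2014-10-28 15:00:10', 21.4);
  -- ... up to 1,000 INSERTs ...
APPLY BATCH;
```

Note that a plain BEGIN BATCH is logged: the coordinator writes the whole batch to its batchlog for atomicity, which adds overhead when the batch is used purely for throughput. BEGIN UNLOGGED BATCH skips the batchlog.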

The problem I'm experiencing is that eventually, after the script has been running for almost
40 minutes, the heap gets saturated. OldGen fills up and then there is intensive GC activity
trying to free OldGen objects, but each pass can only free very little space. The GC then
saturates the CPU. Here are the graphs obtained with VisualVM that show this behavior:

HEAP usage: [graph not included in the archive]
OldGen full (via VisualGC): [graph not included in the archive]

Moreover, when the heap is saturated, IO activity drops from an average of 90% HD utilization
to roughly 15%. So I end up in a situation where very little data is flushed, very little
memory is freed, and the insert rate gets very slow. If the insert process is stopped, C*
completes all its pending flushes and after some time GC activity stops, but OldGen occupancy
remains almost full.

Why is the GC not capable of freeing more memory?
Isn't Cassandra supposed to stop accepting writes until a certain amount of memory is freed?
I'm sceptical about increasing the size of the memtables: if the IO subsystem can't cope
with the flush activity, the problem would only be delayed.
Can this problem be related in any way to our CF indexing settings?
Why, after completing all pending flushes and compactions, is OldGen still almost full, even
with mct (memtable_cleanup_threshold) set to 0.15?
Is a BATCH statement the appropriate way to insert multiple values into the same CF?

Any thoughts on this would be appreciated. I can provide full logs or config files to anyone
interested.

Adrià.

P.S. Details on the setup:
I'm working with the default values except for:
- offheap_objects enabled
- on-heap memtable size set to 128 MB. I've seen the problem reproduce with larger on-heap
memtable sizes as well.
- off-heap memtable size set to 2.5 GB.
- the number of memtable flush writer threads is 3.
- memtable_cleanup_threshold set to 0.15 to force regular flushes to disk.
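For reference, here is a sketch of how these overrides might appear in cassandra.yaml, assuming the Cassandra 2.1 option names (in particular, assuming the flush-threshold setting refers to memtable_cleanup_threshold):

```yaml
# cassandra.yaml excerpts (Cassandra 2.1 option names; values from the list above)
memtable_allocation_type: offheap_objects   # offheap_objects enabled
memtable_heap_space_in_mb: 128              # on-heap memtable size
memtable_offheap_space_in_mb: 2560          # off-heap memtable size (2.5 GB)
memtable_flush_writers: 3                   # memtable flusher threads
memtable_cleanup_threshold: 0.15            # force regular flushes to disk
```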

My total heap size is 1 GB, with a NewGen region of 256 MB. The C* node has 4 GB RAM, an Intel
Xeon E5520 CPU @ 2.27 GHz (3 cores), and a 500 GB SATA HD. Debian 7 + Cassandra 2.1.0 + Oracle
Java JRE (build 1.7.0_71-b14). The writing client is implemented in PHP with the YACassandraPDO
CQL library, which is based on Thrift, and runs on a separate machine.
