cassandra-user mailing list archives

From Mohammed Guller <>
Subject Re: C* throws OOM error despite use of automatic paging
Date Tue, 13 Jan 2015 05:19:03 GMT
There are no tombstones.


On Jan 12, 2015, at 9:11 PM, Dominic Letz <<>> wrote:

Does your use case include many tombstones? If yes, that might explain the OOM situation.

If you want to know for sure, you can enable heap-dump generation on crash:
just uncomment JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError" and then run your query
again. The heap dump will have the answer.
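For reference, the relevant lines look something like this (the HeapDumpPath line is optional, and the path shown is just an example, not a default):

```shell
# Uncomment to write a heap dump when the JVM hits OutOfMemoryError
JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError"
# Optionally pick where the dump lands (example path; make sure the disk has room)
JVM_OPTS="$JVM_OPTS -XX:HeapDumpPath=/var/tmp/cassandra-oom.hprof"
```

The resulting .hprof file can then be opened in a heap analyzer to see what is actually filling the heap.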

On Tue, Jan 13, 2015 at 10:54 AM, Mohammed Guller <<>> wrote:
The heap usage is pretty low (less than 700 MB) when the application starts. I can see the
heap usage gradually climbing once the application starts. C* does not log any errors before
the OOM happens.

Data is on EBS. Write throughput is quite high with two applications simultaneously pumping
data into C*.


From: Ryan Svihla [<>]
Sent: Monday, January 12, 2015 3:39 PM
To: user

Subject: Re: C* throws OOM error despite use of automatic paging

I think it's more accurate to say that auto paging prevents one type of OOM. It's premature
to diagnose it as 'not happening'.

What is heap usage when you start? Are you storing your data on EBS? What kind of write throughput
do you have going on at the same time? What errors do you have in the cassandra logs before
this crashes?

On Sat, Jan 10, 2015 at 1:48 PM, Mohammed Guller <<>> wrote:
nodetool cfstats shows 9GB. We are storing simple primitive value. No blobs or collections.


From: DuyHai Doan [<>]
Sent: Friday, January 9, 2015 12:51 AM
Subject: Re: C* throws OOM error despite use of automatic paging

What is the data size of the column family you're trying to fetch with paging ? Are you storing
big blob or just primitive values ?

On Fri, Jan 9, 2015 at 8:33 AM, Mohammed Guller <<>> wrote:
Hi –

We have an ETL application that reads all rows from Cassandra (2.1.2), filters them, and stores
a small subset in an RDBMS. Our application uses Datastax’s Java driver (2.1.4) to fetch
data from the C* nodes. Since the Java driver supports automatic paging, I was under the impression
that SELECT queries should not cause an OOM error on the C* nodes. However, even with just
16GB of data on each node, the C* nodes start throwing OOM errors as soon as the application
starts iterating through the rows of a table.

The application code looks something like this:

Statement stmt = new SimpleStatement("SELECT x,y,z FROM cf").setFetchSize(5000);
ResultSet rs = session.execute(stmt);
while (!rs.isExhausted()) {
    Row row = rs.one();
    // filter the row and write the matches to the RDBMS
}

Even after we reduced the page size to 1000, the C* nodes still crash. C* is running on m3.xlarge
machines (4 cores, 15GB). We manually increased the heap size to 8GB just to see how much
heap C* consumes. Within 10-15 minutes, the heap usage climbs up to 7.6GB. That does not make
sense. Either automatic paging is not working or we are missing something.
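As a sanity check on what paging should buy us, here is a toy, self-contained simulation of page-at-a-time consumption (the PagingSketch class and its method names are hypothetical, not the Datastax driver API): the client only ever holds one page of rows, so peak resident rows stay at the page size no matter how large the table is.

```java
import java.util.ArrayList;
import java.util.List;

// Toy simulation of driver-style paging (hypothetical names, not the Datastax API).
public class PagingSketch {
    // Pretend server: returns rows in [offset, offset + pageSize), capped at totalRows.
    static List<Integer> fetchPage(int offset, int pageSize, int totalRows) {
        List<Integer> page = new ArrayList<>();
        for (int i = offset; i < Math.min(offset + pageSize, totalRows); i++) {
            page.add(i);
        }
        return page;
    }

    // Consume all rows page by page; return the largest number of rows
    // resident in memory at once (should equal the page size).
    static int consume(int totalRows, int pageSize) {
        int maxResident = 0;
        for (int offset = 0; offset < totalRows; offset += pageSize) {
            List<Integer> page = fetchPage(offset, pageSize, totalRows);
            maxResident = Math.max(maxResident, page.size());
            // process the page here, then let it be garbage-collected
        }
        return maxResident;
    }

    public static void main(String[] args) {
        System.out.println(consume(16_000, 5_000)); // prints 5000
    }
}
```

If the client behaves like this but the server heap still climbs, the per-page work done on the server side is a better suspect than the total result-set size.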

Does anybody have insights as to what could be happening? Thanks.



Ryan Svihla

Dominic Letz
Director of R&D
