cassandra-commits mailing list archives

From "Pushpak (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (CASSANDRA-12831) OutOfMemoryError with Cassandra 3.0.9
Date Fri, 10 Feb 2017 09:50:41 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-12831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15861006#comment-15861006 ]

Pushpak edited comment on CASSANDRA-12831 at 2/10/17 9:50 AM:
--------------------------------------------------------------

We are also facing an OOM issue with Cassandra 3.0.9. Every other day, some node in the
Cassandra cluster crashes.

We analyzed the heap dump, and it appears there are around 55 Thrift objects of roughly
128 MB each, about 6.9 GB in total, which nearly exhausts our configured 8 GB Cassandra heap.

Heap dump screenshot: http://stackoverflow.com/questions/42022677/cassandra-outofmemory

> OutOfMemoryError with Cassandra 3.0.9
> -------------------------------------
>
>                 Key: CASSANDRA-12831
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12831
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: Fedora 24  / Java  1.8.0_91 / Cassandra 3.0.9
> Mac OS X 10.11.6 / Java 1.8.0_102 / Cassandra 3.0.9
>            Reporter: John Sanda
>         Attachments: system.log
>
>
> I have been running some tests on a monitoring system I work on, and Cassandra is
> consistently crashing with OOMEs, after which the JVM exits. This is happening in a dev
> environment with a single node created with ccm.
> The monitoring server ingests 4,000 data points every 10 seconds. Every two hours a job
> runs which fetches all raw data from the past two hours; the raw data is compressed,
> written to another table, and then deleted. After 3 or 4 runs of the job Cassandra
> crashes. Initially I thought the problem was in my application code, but I no longer
> believe that: I set up the same test environment with Cassandra 3.9, and it has been
> running for almost 48 hours without error, even though I actually increased the load on
> the 3.9 environment. (A sketch of the job is given after the schemas below.)
> The schema for the raw data which is queried looks like:
> {noformat}
> CREATE TABLE hawkular_metrics.data (
>     tenant_id text,
>     type tinyint,
>     metric text,
>     dpart bigint,
>     time timeuuid,
>     n_value double,
>     tags map<text, text>,
>     PRIMARY KEY ((tenant_id, type, metric, dpart), time)
> ) WITH CLUSTERING ORDER BY (time DESC)
> {noformat}
> And the schema for the table that is written to:
> {noformat}
> CREATE TABLE hawkular_metrics.data_compressed (
>     tenant_id text,
>     type tinyint,
>     metric text,
>     dpart bigint,
>     time timestamp,
>     c_value blob,
>     tags blob,
>     ts_value blob,
>     PRIMARY KEY ((tenant_id, type, metric, dpart), time)
> ) WITH CLUSTERING ORDER BY (time DESC)
> {noformat}
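> For illustration, here is a minimal sketch of what the two-hourly job described above
> could look like with the DataStax Java driver; the class name, the compress() helper, and
> the window handling are hypothetical stand-ins, not the actual application code:
> {noformat}
> import com.datastax.driver.core.ResultSet;
> import com.datastax.driver.core.Session;
> import com.datastax.driver.core.utils.UUIDs;
> import java.nio.ByteBuffer;
> import java.util.Date;
> import java.util.UUID;
>
> // Hypothetical job: read the past two hours of raw points for one partition,
> // write one compressed blob to data_compressed, then delete the raw rows.
> public class CompressionJob {
>     void run(Session session, String tenantId, byte type, String metric, long dpart) {
>         // Smallest timeuuid for the start of the two-hour window.
>         UUID start = UUIDs.startOf(System.currentTimeMillis() - 2 * 60 * 60 * 1000L);
>
>         // Fetch the raw data; the driver pages through the rows transparently.
>         ResultSet raw = session.execute(
>             "SELECT time, n_value FROM hawkular_metrics.data " +
>             "WHERE tenant_id = ? AND type = ? AND metric = ? AND dpart = ? AND time >= ?",
>             tenantId, type, metric, dpart, start);
>
>         ByteBuffer block = compress(raw); // application-specific compression
>
>         session.execute(
>             "INSERT INTO hawkular_metrics.data_compressed " +
>             "(tenant_id, type, metric, dpart, time, c_value) VALUES (?, ?, ?, ?, ?, ?)",
>             tenantId, type, metric, dpart, new Date(), block);
>
>         // Cassandra 3.0 supports range deletes on clustering columns.
>         session.execute(
>             "DELETE FROM hawkular_metrics.data " +
>             "WHERE tenant_id = ? AND type = ? AND metric = ? AND dpart = ? AND time >= ?",
>             tenantId, type, metric, dpart, start);
>     }
>
>     private ByteBuffer compress(ResultSet rows) {
>         return ByteBuffer.allocate(0); // placeholder for the real encoder
>     }
> }
> {noformat}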
> I am using version 3.0.1 of the DataStax Java driver. Last night I changed the driver's
> page size from the default to 1000, and so far I have not seen any errors.
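> For reference, a change like that can be made globally when building the Cluster, or per
> statement; a minimal sketch against driver 3.0.x (the contact point and query here are
> placeholders):
> {noformat}
> import com.datastax.driver.core.Cluster;
> import com.datastax.driver.core.QueryOptions;
> import com.datastax.driver.core.Session;
> import com.datastax.driver.core.SimpleStatement;
> import com.datastax.driver.core.Statement;
>
> public class FetchSizeExample {
>     public static void main(String[] args) {
>         // Lower the default fetch (page) size for all queries from the
>         // driver default of 5000 rows down to 1000.
>         Cluster cluster = Cluster.builder()
>                 .addContactPoint("127.0.0.1")
>                 .withQueryOptions(new QueryOptions().setFetchSize(1000))
>                 .build();
>         Session session = cluster.connect();
>
>         // The page size can also be overridden on a single statement.
>         Statement stmt = new SimpleStatement(
>                 "SELECT * FROM hawkular_metrics.data LIMIT 10").setFetchSize(1000);
>         session.execute(stmt);
>
>         cluster.close();
>     }
> }
> {noformat}
> Smaller pages bound how many rows the server materializes per response, which is
> consistent with the reduced memory pressure observed after the change.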
> I have attached the log file. I was going to attach one of the heap dumps, but it looks
> like they are too big.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
