cassandra-commits mailing list archives

From "Stefan Podkowinski (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-12831) OutOfMemoryError with Cassandra 3.0.9
Date Mon, 13 Feb 2017 13:26:41 GMT


Stefan Podkowinski commented on CASSANDRA-12831:

Looks like many netty-related allocations in the heap dump. 3.9 is using an updated netty
version; maybe that's why you're not seeing this kind of GC issue there. We've updated netty
in 3.0.11 as well, but it's not released yet.
But you can already give it a try and see if this is still an issue by creating another ccm cluster
from git, e.g. {{ccm create 3.0-latest -v git:cassandra-3.0 -n 1}}.
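For example, a rough sketch of spinning up the latest 3.0 branch locally (the cluster name and node count just follow the command above; {{ccm status}} is one way to confirm the node came up):

{noformat}
ccm create 3.0-latest -v git:cassandra-3.0 -n 1
ccm start
ccm status
{noformat}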

> OutOfMemoryError with Cassandra 3.0.9
> -------------------------------------
>                 Key: CASSANDRA-12831
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: Fedora 24  / Java  1.8.0_91 / Cassandra 3.0.9
> Mac OS X 10.11.6 / Java 1.8.0_102 / Cassandra 3.0.9
>            Reporter: John Sanda
>         Attachments: system.log
> I have been running some tests on a monitoring system I work on, and Cassandra is consistently
crashing with OOMEs and the JVM exits. This is happening in a dev environment with a single
node created with ccm.
> The monitoring server is ingesting 4,000 data points every 10 seconds. Every two hours
a job runs which fetches all raw data from the past two hours. The raw data is compressed,
written to another table, and then deleted. After 3 or 4 runs of the job Cassandra crashes.
Initially I thought that the problem was in my application code, but I am no longer of that
opinion because I set up the same test environment with Cassandra 3.9, and it has been running
for almost 48 hours without error. And I actually increased the load on the 3.9 environment.
> The schema for the raw data which is queried looks like:
> {noformat}
>     tenant_id text,
>     type tinyint,
>     metric text,
>     dpart bigint,
>     time timeuuid,
>     n_value double,
>     tags map<text, text>,
>     PRIMARY KEY ((tenant_id, type, metric, dpart), time)
> {noformat}
> And the schema for the table that is written to:
> {noformat}
> CREATE TABLE hawkular_metrics.data_compressed (
>     tenant_id text,
>     type tinyint,
>     metric text,
>     dpart bigint,
>     time timestamp,
>     c_value blob,
>     tags blob,
>     ts_value blob,
>     PRIMARY KEY ((tenant_id, type, metric, dpart), time)
> {noformat}
> I am using version 3.0.1 of the DataStax Java driver. Last night I changed the driver's
page size from the default to 1000, and so far I have not seen any errors.
> I have attached the log file. I was going to attach one of the heap dumps, but it looks
like they are too big.
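For reference, with the 3.0.x DataStax Java driver the page size can be lowered either globally via {{QueryOptions}} or per statement. A minimal sketch, assuming a single local node; the raw-data table name isn't shown in the report, so {{raw_data}} and the bind values below are placeholders:

{noformat}
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.QueryOptions;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SimpleStatement;
import com.datastax.driver.core.Statement;

public class PageSizeExample {
    public static void main(String[] args) {
        // Lower the default page size (5000 rows) to 1000 rows for all queries
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withQueryOptions(new QueryOptions().setFetchSize(1000))
                .build();
        Session session = cluster.connect("hawkular_metrics");

        // Or lower it only for the large range reads over a raw-data partition
        // (raw_data and the bind values are placeholders, not from the report)
        Statement read = new SimpleStatement(
                "SELECT time, n_value, tags FROM raw_data "
                + "WHERE tenant_id = ? AND type = ? AND metric = ? AND dpart = ?",
                "tenant-1", (byte) 0, "cpu", 0L);
        read.setFetchSize(1000);

        ResultSet rs = session.execute(read);
        for (Row row : rs) {
            // Rows are pulled from the server 1000 at a time as the iterator advances
        }

        cluster.close();
    }
}
{noformat}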

This message was sent by Atlassian JIRA
