cassandra-commits mailing list archives

From "Maxim Podkolzine (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-12662) OOM when using SASI index
Date Mon, 26 Sep 2016 13:25:20 GMT


Maxim Podkolzine commented on CASSANDRA-12662:

OK. Here's an update. We've added SSD disks and 20 GB of RAM per node. Since we run three
nodes, this is practically the limit of what we can allocate for Cassandra. The load is the
same. Cassandra still crashes with OOM fairly quickly; a glance over the hprof heap dump
shows ~4 GB of PartitionUpdate objects. Do you think it's possible to configure the nodes
within the current hardware limitations? Or should we move to a single large node with more
RAM and CPU?
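For context on the configuration question: in Cassandra 3.x, memtable and flush behaviour is controlled in cassandra.yaml. The option names below are real 3.x settings, but the values are only an illustrative sketch, not a tested recommendation for this workload:

```yaml
# Illustrative values only -- tune for the actual workload.
# Cap memtable heap usage so pending PartitionUpdates are flushed sooner:
memtable_heap_space_in_mb: 2048
# Move memtable buffers off-heap to relieve GC pressure:
memtable_allocation_type: offheap_objects
# Additional flush writers can help when flushes back up on SSDs:
memtable_flush_writers: 2
# Trigger flushes earlier, before memtables accumulate:
memtable_cleanup_threshold: 0.2
```

Whether these help depends on whether the bottleneck is flush throughput or SASI index build cost; a heap dump comparison before and after would confirm.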

> OOM when using SASI index
> -------------------------
>                 Key: CASSANDRA-12662
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: Linux, 4 CPU cores, 16 GB RAM; the Cassandra process utilizes
> ~8 GB, of which ~4 GB is Java heap
>            Reporter: Maxim Podkolzine
>            Priority: Critical
>             Fix For: 3.x
>         Attachments: memory-dump.png
> 2.8 GB of the heap is taken by index data pending flush (see the screenshot).
> As a result, the node fails with OOM.
> Questions:
> - Why can't Cassandra keep up with the inserted data and flush it?
> - What resources/configuration should be changed to improve the performance?
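For readers unfamiliar with SASI: the index type in question is declared as a CQL custom index backed by the SASIIndex class. A minimal sketch, with a hypothetical table and column since the issue does not include the actual schema:

```sql
-- Hypothetical schema for illustration only.
CREATE TABLE ks.events (
    id uuid PRIMARY KEY,
    payload text
);

-- A SASI index is created as a custom index using the SASIIndex class:
CREATE CUSTOM INDEX events_payload_idx ON ks.events (payload)
USING 'org.apache.cassandra.index.sasi.SASIIndex'
WITH OPTIONS = {'mode': 'CONTAINS'};
```

SASI builds its index structures in memory alongside the memtable and flushes them with it, which is why heavy ingest into an indexed column can hold large amounts of pending index data on the heap, as described above.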

This message was sent by Atlassian JIRA
