cassandra-commits mailing list archives

From "Pavel Yaskevich (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-11383) SASI index build leads to massive OOM
Date Sat, 19 Mar 2016 19:42:33 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-11383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15202934#comment-15202934 ]

Pavel Yaskevich commented on CASSANDRA-11383:
---------------------------------------------

[~doanduyhai] Let me first elaborate on what I mean by "it's not sparse" - SPARSE is meant to be
used when there are a lot of index *values* and each of those values has *less than 5 keys*, so
each value is *SPARSE*ly represented in the index. SPARSE has more to do with keys/tokens than
with values, which is why the example uses "created_at": that column would have a lot of values,
and each value would, most likely, have only a single token/key attached to it. We actually detect
this situation, and the actual index is going to be constructed correctly even if SPARSE mode was
set on a column that is not sparse.
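
To make that concrete, here is a minimal CQL sketch of the case SPARSE is designed for (keyspace,
table, and column names are hypothetical, not from this ticket):

    -- hypothetical example schema (not from this ticket)
    CREATE TABLE ks.events (
        id uuid PRIMARY KEY,
        created_at bigint,  -- high cardinality: almost every row has a unique value
        category text       -- low cardinality: few values, each shared by many rows
    );

    -- SPARSE fits created_at: many distinct index values, each attached to fewer than 5 keys
    CREATE CUSTOM INDEX events_created_at_idx ON ks.events (created_at)
    USING 'org.apache.cassandra.index.sasi.SASIIndex'
    WITH OPTIONS = { 'mode': 'SPARSE' };

A column like "category" is the opposite case - few values, each with many keys - and should not
be declared SPARSE.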

Regarding LCS - it's LeveledCompactionStrategy, which lets you set a maximum sstable size. I would
suggest you set it to something like 1G or less, because the stitching and the OOM you see are
directly related to the size of the sstable files.
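
For example, assuming the hypothetical ks.events table above, capping sstables at roughly 1G would
look like:

    -- sketch against the hypothetical schema above
    ALTER TABLE ks.events
    WITH compaction = {
        'class': 'LeveledCompactionStrategy',
        'sstable_size_in_mb': '1024'  -- target max sstable size of ~1 GB
    };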

Meanwhile, I am working on a fix for the current situation.

> SASI index build leads to massive OOM
> -------------------------------------
>
>                 Key: CASSANDRA-11383
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11383
>             Project: Cassandra
>          Issue Type: Bug
>          Components: CQL
>         Environment: C* 3.4
>            Reporter: DOAN DuyHai
>         Attachments: CASSANDRA-11383.patch, new_system_log_CMS_8GB_OOM.log, system.log_sasi_build_oom
>
>
> 13 bare-metal machines:
> - 6-core CPU (12 HT)
> - 64 GB RAM
> - 4 SSDs in RAID0
> JVM settings:
> - G1 GC
> - Xms32G, Xmx32G
> Data set:
> - ≈ 100 GB per node
> - 1.3 TB cluster-wide
> - ≈ 20 GB for all SASI indices
> C* settings:
> - concurrent_compactors: 1
> - compaction_throughput_mb_per_sec: 256
> - memtable_heap_space_in_mb: 2048
> - memtable_offheap_space_in_mb: 2048
> I created 9 SASI indices (see the CQL sketch after this report):
> - 8 indices on text fields: NonTokenizingAnalyzer, PREFIX mode, case-insensitive
> - 1 index on a numeric field: SPARSE mode
> After a while, the nodes just went OOM.
> I attach log files. You can see a lot of GC happening while index segments are flushed to disk. At some point the node OOMs ...
> /cc [~xedin]
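
For context, the eight text-field indices described in the report would be declared along these
lines (a sketch against the hypothetical ks.events schema above, not the reporter's actual DDL):

    -- hypothetical DDL illustrating the reported index configuration
    CREATE CUSTOM INDEX events_category_idx ON ks.events (category)
    USING 'org.apache.cassandra.index.sasi.SASIIndex'
    WITH OPTIONS = {
        'mode': 'PREFIX',
        'analyzer_class': 'org.apache.cassandra.index.sasi.analyzer.NonTokenizingAnalyzer',
        'case_sensitive': 'false'
    };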



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
