cassandra-commits mailing list archives

From "Ricardo Bartolome (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-13999) Segfault during memtable flush
Date Thu, 09 Nov 2017 13:50:00 GMT


Ricardo Bartolome commented on CASSANDRA-13999:

Hi [~beobal]. We just realised the debug.log fragment we provided initially is wrong: it belongs to a different stack trace that we got in the meantime, which we think is related.

So I did the following:
* Deleted node_crashing_debug.log to avoid confusion
* Uploaded flush_exception_debug_fragment.log.obfuscated, which is what we get from our logging
system (we no longer have the debug.log files; we will keep them more carefully next time)

Regarding the other segfault we suffered, which we think is related and whose stack trace
is very similar to CASSANDRA-12590:
* Uploaded cassandra-jvm-file-error-1509717499-pid10419.log.obfuscated
* Uploaded compaction_exception_debug_fragment.obfuscated.log, which is the debug.log fragment
that you saw initially.

> Segfault during memtable flush
> ------------------------------
>                 Key: CASSANDRA-13999
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Local Write-Read Paths
>         Environment: * Cassandra 3.9
> * Oracle JDK 1.8.0_112 and 1.8.0_131
> * Kernel 4.9.43-17.38.amzn1.x86_64 and 3.14.35-28.38.amzn1.x86_64
>            Reporter: Ricardo Bartolome
>            Priority: Critical
>         Attachments: cassandra-jvm-file-error-1509698372-pid16151.log.obfuscated, cassandra-jvm-file-error-1509717499-pid10419.log.obfuscated,
cassandra_config.yaml, compaction_exception_debug_fragment.obfuscated.log, flush_exception_debug_fragment.obfuscated.log
> We are getting segfaults on a production Cassandra cluster, apparently caused by Memtable
flushes to disk.
> {code}
> Current thread (0x000000000cd77920):  JavaThread "PerDiskMemtableFlushWriter_0:140" daemon
[_thread_in_Java, id=28952, stack(0x00007f8b7aa53000,0x00007f8b7aa94000)]
> {code}
> Stack
> {code}
> Stack: [0x00007f8b7aa53000,0x00007f8b7aa94000],  sp=0x00007f8b7aa924a0,  free space=253k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
> J 21889 C2;)Lorg/apache/cassandra/db/RowIndexEntry;
(361 bytes) @ 0x00007f8e9fcf75ac [0x00007f8e9fcf42c0+0x32ec]
> J 22464 C2 org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents()V (383
bytes) @ 0x00007f8e9f17b988 [0x00007f8e9f17b5c0+0x3c8]
> j  org.apache.cassandra.db.Memtable$;+1
> j  org.apache.cassandra.db.Memtable$;+1
> J 18865 C2 (126 bytes) @ 0x00007f8e9d3c9540 [0x00007f8e9d3c93a0+0x1a0]
> J 21832 C2 java.util.concurrent.ThreadPoolExecutor.runWorker(Ljava/util/concurrent/ThreadPoolExecutor$Worker;)V
(225 bytes) @ 0x00007f8e9f16856c [0x00007f8e9f168400+0x16c]
> J 6720 C1 java.util.concurrent.ThreadPoolExecutor$ (9 bytes) @ 0x00007f8e9def73c4
> J 22079 C2 (17 bytes) @ 0x00007f8e9e67c4ac [0x00007f8e9e67c460+0x4c]
> v  ~StubRoutines::call_stub
> V  []  JavaCalls::call_helper(JavaValue*, methodHandle*, JavaCallArguments*,
> V  []  JavaCalls::call_virtual(JavaValue*, KlassHandle, Symbol*, Symbol*,
JavaCallArguments*, Thread*)+0x321
> V  []  JavaCalls::call_virtual(JavaValue*, Handle, KlassHandle, Symbol*,
Symbol*, Thread*)+0x47
> V  []  thread_entry(JavaThread*, Thread*)+0xa0
> V  []  JavaThread::thread_main_inner()+0x103
> V  []  JavaThread::run()+0x11c
> V  []  java_start(Thread*)+0x108
> C  []  start_thread+0xc5
> {code}
> For further details, we attached:
> * JVM error file with all details
> * cassandra config file (we are using offheap_buffers as memtable_allocation_method)
> * some lines printed in debug.log when the JVM error file was created and process died
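> The relevant fragment of the attached cassandra_config.yaml (comments here are ours, added for context) looks like this:
> {code}
> # Current setting: memtable buffers are allocated off-heap,
> # which is what we suspect interacts with the flush segfault.
> memtable_allocation_method: offheap_buffers
>
> # Candidate workaround (still to be validated): keep buffers on heap.
> # memtable_allocation_method: heap_buffers
> {code}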
> h5. Reproducing the issue
> So far we have been unable to reproduce it. It happens once or twice a week on individual
nodes, during both high-load and low-load periods. We have seen that when we replace EC2
instances and bootstrap new ones, due to compactions happening on source nodes before streaming
starts, sometimes more than one node was affected at once, leaving us with 2 out of 3
replicas down and UnavailableExceptions in the cluster.
> This issue may be related to CASSANDRA-12590 (Segfault reading secondary index), even
though this one is on the write path. Can someone confirm whether both issues could be related?
> h5. Specifics of our scenario:
> * Cassandra 3.9 on Amazon Linux (before this we were running Cassandra 2.0.9, and there
are no records of this happening there, although I was not working on Cassandra at the time)
> * 12 x i3.2xlarge EC2 instances (8 core, 64GB RAM)
> * a total of 176 keyspaces (there is a per-customer pattern)
> ** Some keyspaces have a single table, while others have 2 or 5 tables
> ** There is a table that uses standard secondary indexes ("emailindex" on "user_info")
> * It happens on both Oracle JDK 1.8.0_112 and 1.8.0_131
> * It happens in both kernel 4.9.43-17.38.amzn1.x86_64 and 3.14.35-28.38.amzn1.x86_64
> h5. Possible workarounds/solutions that we have in mind (to be validated yet)
> * Switching to heap_buffers (in case offheap_buffers triggers the bug), although we still
need to measure the performance degradation under that scenario.
> * Removing secondary indexes in favour of Materialized Views for this specific case,
although we are also concerned that using MVs may introduce new issues present in our
current Cassandra 3.9.
> * Upgrading to 3.11.1 is an option, but we are trying to keep it as a last resort, given
that the cost of migrating is high and we have no guarantee that new bugs affecting node
availability will not be introduced.

This message was sent by Atlassian JIRA
