cassandra-commits mailing list archives

From "William Saar (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (CASSANDRA-8152) Cassandra crashes with Native memory allocation failure
Date Thu, 01 Jan 2015 13:51:14 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-8152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14262548#comment-14262548 ]

William Saar edited comment on CASSANDRA-8152 at 1/1/15 1:50 PM:
-----------------------------------------------------------------

I have this issue as well in a 6-node cluster running Cassandra 2.1.2. I only have 3 '(deleted)'
files in my dump. However, I notice that the Dynamic Libraries section has 131072 lines with
mapped files specified in both my dumps and the ones posted here. 

Others seem to have very similar issues with Cassandra 2.1 in installations with lots of files:
http://grokbase.com/t/cassandra/user/14ckk4xyhe/cassandra-2-1-0-crashes-the-jvm-with-oom-with-heaps-of-memory-free

I did have one node crash earlier with a clean shutdown message (not a JVM dump) that contained
an IO exception complaining about too many open files.
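[Editor's note] The two symptoms mentioned here (a huge mapped-file count and "too many open files") can be related by inspecting the process directly. The sketch below is illustrative only: it uses the shell's own pid for demonstration, and the `pgrep` pattern for finding the Cassandra pid is an assumption about the deployment, not something taken from this report.

```shell
# Sketch: count open file descriptors and memory mappings for a process.
# For Cassandra, set PID from e.g.: pgrep -f CassandraDaemon | head -n1
PID=$$   # this shell's own pid, purely for demonstration

FDS=$(ls /proc/$PID/fd 2>/dev/null | wc -l)      # open file descriptors
MAPS=$(wc -l < /proc/$PID/maps)                  # memory-mapped regions

echo "open files:  $FDS (soft limit: $(ulimit -Sn))"
echo "memory maps: $MAPS (vm.max_map_count: $(cat /proc/sys/vm/max_map_count))"
```

If the mappings count is near vm.max_map_count (as 131072 lines in the hs_err Dynamic Libraries section would suggest), mmap failures can occur even with plenty of free RAM.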


was (Author: william):
I have this issue as well in a 6-node cluster running Cassandra 2.1.2. I only have 3 '(deleted)'
files in my dump. However, I notice that the Dynamic Libraries section has 131072 lines with
mapped files specified in both my dumps and the ones posted here. 

Others also may have very similar issues with Cassandra 2.1 in installations with lots of
files:
http://grokbase.com/t/cassandra/user/14ckk4xyhe/cassandra-2-1-0-crashes-the-jvm-with-oom-with-heaps-of-memory-free

I did have one node crash earlier with a clean shutdown message (not a JVM dump) that contained
an IO exception complaining about too many open files.

> Cassandra crashes with Native memory allocation failure
> -------------------------------------------------------
>
>                 Key: CASSANDRA-8152
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8152
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: EC2 (i2.xlarge)
>            Reporter: Babar Tareen
>            Assignee: Brandon Williams
>            Priority: Minor
>         Attachments: db06_hs_err_pid26159.log.zip, db_05_hs_err_pid25411.log.zip
>
>
> On a 6 node Cassandra (datastax-community-2.1) cluster running on EC2 (i2.xlarge) instances,
the JVM hosting the Cassandra service randomly crashes with the following error.
> {code}
> #
> # There is insufficient memory for the Java Runtime Environment to continue.
> # Native memory allocation (malloc) failed to allocate 12288 bytes for committing reserved
memory.
> # Possible reasons:
> #   The system is out of physical RAM or swap space
> #   In 32 bit mode, the process size limit was hit
> # Possible solutions:
> #   Reduce memory load on the system
> #   Increase physical memory or swap space
> #   Check if swap backing store is full
> #   Use 64 bit Java on a 64 bit OS
> #   Decrease Java heap size (-Xmx/-Xms)
> #   Decrease number of Java threads
> #   Decrease Java thread stack sizes (-Xss)
> #   Set larger code cache with -XX:ReservedCodeCacheSize=
> # This output file may be truncated or incomplete.
> #
> #  Out of Memory Error (os_linux.cpp:2747), pid=26159, tid=140305605682944
> #
> # JRE version: Java(TM) SE Runtime Environment (7.0_60-b19) (build 1.7.0_60-b19)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.60-b09 mixed mode linux-amd64 compressed
oops)
> # Failed to write core dump. Core dumps have been disabled. To enable core dumping, try
"ulimit -c unlimited" before starting Java again
> #
> ---------------  T H R E A D  ---------------
> Current thread (0x0000000008341000):  JavaThread "MemtableFlushWriter:2055" daemon [_thread_new,
id=23336, stack(0x00007f9b71c56000,0x00007f9b71c97000)]
> Stack: [0x00007f9b71c56000,0x00007f9b71c97000],  sp=0x00007f9b71c95820,  free space=254k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
> V  [libjvm.so+0x99e7ca]  VMError::report_and_die()+0x2ea
> V  [libjvm.so+0x496fbb]  report_vm_out_of_memory(char const*, int, unsigned long, char
const*)+0x9b
> V  [libjvm.so+0x81d81e]  os::Linux::commit_memory_impl(char*, unsigned long, bool)+0xfe
> V  [libjvm.so+0x81d8dc]  os::pd_commit_memory(char*, unsigned long, bool)+0xc
> V  [libjvm.so+0x81565a]  os::commit_memory(char*, unsigned long, bool)+0x2a
> V  [libjvm.so+0x81bdcd]  os::pd_create_stack_guard_pages(char*, unsigned long)+0x6d
> V  [libjvm.so+0x9522de]  JavaThread::create_stack_guard_pages()+0x5e
> V  [libjvm.so+0x958c24]  JavaThread::run()+0x34
> V  [libjvm.so+0x81f7f8]  java_start(Thread*)+0x108
> {code}
> Changes in cassandra-env.sh settings
> {code}
> MAX_HEAP_SIZE="8G"
> HEAP_NEWSIZE="800M"
> JVM_OPTS="$JVM_OPTS -XX:TargetSurvivorRatio=50"
> JVM_OPTS="$JVM_OPTS -XX:+AggressiveOpts"
> JVM_OPTS="$JVM_OPTS -XX:+UseLargePages"
> {code}
> Writes are about 10K-15K/sec and there are very few reads. Cassandra 2.0.9 with the same
settings never crashed. JVM crash logs are attached from two machines.
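[Editor's note] The crash site in the log above (JavaThread::create_stack_guard_pages failing inside os::commit_memory) together with the 131072-entry Dynamic Libraries section is consistent with the process hitting the kernel's per-process memory mapping limit rather than exhausting RAM. The checks below are a diagnostic sketch added by the editor, not part of the original report; the huge-page check is included only because -XX:+UseLargePages appears in the cassandra-env.sh changes.

```shell
# Kernel limit on memory mappings per process (65530 is a common
# Linux default); a process at this limit fails mmap() even with
# free RAM, which the JVM reports as a native allocation failure.
MAX_MAPS=$(cat /proc/sys/vm/max_map_count)
echo "vm.max_map_count = $MAX_MAPS"

# Static huge page pool, relevant only because -XX:+UseLargePages
# is enabled in the cassandra-env.sh settings quoted above.
grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo
```

If the mapping limit is the culprit, raising it (e.g. `sysctl -w vm.max_map_count=1048575`) would be the corresponding mitigation; that specific value is taken from later Cassandra production guidance and is an assumption here, not part of this report.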



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
