Linux's default on busy IO boxes is to use all available memory for cache. Try "echo 1 > /proc/sys/vm/drop_caches" (as root) and see if your memory comes back. This drops the page cache, and in my experience is safe, but YMMV.
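Before dropping anything, it's worth seeing how much of the "used" memory is actually cache. A minimal sketch (the inspection is harmless; the drop itself needs root, so it is shown commented out):

```shell
# Show how much memory is page cache vs. genuinely used.
grep -E '^(MemTotal|MemFree|Cached|Dirty):' /proc/meminfo

# Flush dirty pages to disk first, then drop the page cache (root only;
# expect a brief IO stall while dirty pages are written back):
#   sync
#   echo 1 > /proc/sys/vm/drop_caches
```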

If your memory comes back, everything is normal and you should leave it alone. The write may block for a while if you have a lot of unflushed pages; this is expected. If you notice around 20% of your memory being consumed as "dirty" memory (written pages not yet flushed to storage), try setting /proc/sys/vm/dirty_ratio lower. I usually run all of my systems at 5 or lower; 20 is too high for large-memory servers IMO.
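To check and lower those writeback thresholds (the sysctl names are the standard Linux ones; the change itself needs root, so it is shown commented out):

```shell
# Current thresholds, as a percentage of total memory:
echo "dirty_ratio: $(cat /proc/sys/vm/dirty_ratio)"
echo "dirty_background_ratio: $(cat /proc/sys/vm/dirty_background_ratio)"

# Lower the hard limit to 5 as suggested above (root only):
#   sysctl -w vm.dirty_ratio=5
# Persist across reboots by adding this line to /etc/sysctl.conf:
#   vm.dirty_ratio = 5
```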


On Wed, Jun 13, 2012 at 11:01 AM, Poziombka, Wade L <> wrote:

Actually, this is without jna.jar. I will add it and see if I still have the same issue.


From: Poziombka, Wade L

Sent: Wednesday, June 13, 2012 10:53 AM
Subject: RE: Much more native memory used by Cassandra than the configured JVM heap size


Seems like my only recourse is to remove jna.jar and just take the performance/swapping pain?


Obviously can’t have the entire box lock up.  I can provide a pmap etc. if needed.


From: Poziombka, Wade L []

Sent: Wednesday, June 13, 2012 10:28 AM
Subject: RE: Much more native memory used by Cassandra than the configured JVM heap size


I have experienced the same issue. The Java heap seems fine, but eventually the OS runs out of memory; in my case it renders the entire box unusable without a hard reboot. Is there a way to limit the native heap usage? Console shows:


xfs invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
Call Trace:
 [<ffffffff800c9d3a>] out_of_memory+0x8e/0x2f3
 [<ffffffff8002dfd7>] __wake_up+0x38/0x4f
 [<ffffffff8000f677>] __alloc_pages+0x27f/0x308
 [<ffffffff80013034>] __do_page_cache_readahead+0x96/0x17b
 [<ffffffff80013971>] filemap_nopage+0x14c/0x360
 [<ffffffff8000896c>] __handle_mm_fault+0x1fd/0x103b
 [<ffffffff8002dfd7>] __wake_up+0x38/0x4f
 [<ffffffff800671f2>] do_page_fault+0x499/0x842
 [<ffffffff800b8f39>] audit_filter_syscall+0x87/0xad
 [<ffffffff8005dde9>] error_exit+0x0/0x84
Node 0 DMA per-cpu: empty
Node 0 DMA32 per-cpu: empty
Node 0 Normal per-cpu:
cpu 0 hot: high 186, batch 31 used:23
cpu 0 cold: high 62, batch 15 used:14
cpu 23 cold: high 62, batch 15 used:8
Node 1 HighMem per-cpu: empty
Free pages:      158332kB (0kB HighMem)
Active:16225503 inactive:1 dirty:0 writeback:0 unstable:0 free:39583 slab:21496
Node 0 DMA free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB
lowmem_reserve[]: 0 0 32320 32320
Node 0 DMA32 free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB
lowmem_reserve[]: 0 0 32320 32320
Node 0 Normal free:16136kB min:16272kB low:20340kB high:24408kB active:3255624



From: aaron morton []
Sent: Tuesday, June 12, 2012 4:08 AM
Subject: Re: Much more native memory used by Cassandra than the configured JVM heap size




which causes the OS to run low on memory.

If the memory is used for mmapped access, the OS can get it back later.


Is the low free memory causing a problem?
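One way to check this from the shell: a process's resident size (RES in top, RSS in ps) includes file-backed mmapped pages that the kernel can evict under pressure, so a huge RES is not all pinned memory. A minimal sketch (the PID below is a stand-in; 9567 was the Cassandra process later in this thread):

```shell
pid=$$   # stand-in PID; substitute the Cassandra PID (9567 in this thread)

# RSS counts mmapped, file-backed pages the kernel can reclaim under
# pressure; VSZ counts the whole address space, mappings included.
ps -o pid=,rss=,vsz= -p "$pid"
```

Comparing RSS against the private writable total from pmap gives a better estimate of what the process really holds.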






Aaron Morton

Freelance Developer



On 12/06/2012, at 5:52 PM, Jason Tang wrote:




I found some information about this issue.

It seems we can choose a different data access strategy to reduce mmap usage, in order to use less memory.

But I didn't find documentation describing this parameter for Cassandra 1.x. Is using it to reduce shared memory usage a good approach, and what is the impact? (BTW, our data model is dynamic: although the throughput is high, the life cycle of the data is short, one hour or less.)



# Choices are auto, standard, mmap, and mmap_index_only.

disk_access_mode: auto


2012/6/12 Jason Tang <>

See my post: I limit the JVM heap to 6G, but Cassandra actually uses more memory that is not counted against the JVM heap.


I use top to monitor total memory used by Cassandra.



-Xms6G -Xmx6G -Xmn1600M


2012/6/12 Jeffrey Kesselman <>

Btw, I suggest you spin up JConsole, as it will give you much more detail on what your VM is actually doing.


On Mon, Jun 11, 2012 at 9:14 PM, Jason Tang <> wrote:



We have a problem with Cassandra's memory usage. We configured the JVM heap at 6G, but after running Cassandra for several hours (insert, update, delete), the total memory used by Cassandra goes up to 15G, which causes the OS to run low on memory.

So I wonder: is it normal for Cassandra to use this much memory?


And how can we limit the native memory used by Cassandra?




Cassandra 1.0.3, 64-bit JDK.


Memory occupied by Cassandra: 15G


 9567 casadm    20   0 28.3g  15g 9.1g S  269 65.1 385:57.65 java



-Xms6G -Xmx6G -Xmn1600M


 # ps -ef | grep  9567

casadm    9567     1 55 Jun11 ?        05:59:44 /opt/jdk1.6.0_29/bin/java -ea -javaagent:/opt/dve/cassandra/bin/../lib/jamm-0.2.5.jar -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms6G -Xmx6G -Xmn1600M -XX:+HeapDumpOnOutOfMemoryError -Xss128k -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Dpasswd.mode=MD5 -Dlog4j.defaultInitOverride=true -cp /opt/dve/cassandra/bin/../conf:/opt/dve/cassandra/bin/../build/classes/main:/opt/dve/cassandra/bin/../build/classes/thrift:/opt/dve/cassandra/bin/../lib/Cassandra-Extensions-1.0.0.jar:/opt/dve/cassandra/bin/../lib/antlr-3.2.jar:/opt/dve/cassandra/bin/../lib/apache-cassandra-1.0.3.jar:/opt/dve/cassandra/bin/../lib/apache-cassandra-clientutil-1.0.3.jar:/opt/dve/cassandra/bin/../lib/apache-cassandra-thrift-1.0.3.jar:/opt/dve/cassandra/bin/../lib/avro-1.4.0-fixes.jar:/opt/dve/cassandra/bin/../lib/avro-1.4.0-sources-fixes.jar:/opt/dve/cassandra/bin/../lib/commons-cli-1.1.jar:/opt/dve/cassandra/bin/../lib/commons-codec-1.2.jar:/opt/dve/cassandra/bin/../lib/commons-lang-2.4.jar:/opt/dve/cassandra/bin/../lib/compress-lzf-0.8.4.jar:/opt/dve/cassandra/bin/../lib/concurrentlinkedhashmap-lru-1.2.jar:/opt/dve/cassandra/bin/../lib/guava-r08.jar:/opt/dve/cassandra/bin/../lib/high-scale-lib-1.1.2.jar:/opt/dve/cassandra/bin/../lib/jackson-core-asl-1.4.0.jar:/opt/dve/cassandra/bin/../lib/jackson-mapper-asl-1.4.0.jar:/opt/dve/cassandra/bin/../lib/jamm-0.2.5.jar:/opt/dve/cassandra/bin/../lib/jline-0.9.94.jar:/opt/dve/cassandra/bin/../lib/json-simple-1.1.jar:/opt/dve/cassandra/bin/../lib/libthrift-0.6.jar:/opt/dve/cassandra/bin/../lib/log4j-1.2.16.jar:/opt/dve/cassandra/bin/../lib/servlet-api-2.5-20081211.jar:/opt/dve/cassandra/bin/../lib/slf4j-api-1.6.1.jar:/opt/dve/cassandra/bin/../lib/slf4j-log4j12-1.6.1.jar:/opt/dve/cassandra/bin/../lib/snakeyaml-1.6.jar:/opt/dve/cassandra/bin/../lib/snappy-java- org.apache.cassandra.thrift.CassandraDaemon



# nodetool -h -p 6080 info
Token            : 85070591730234615865843651857942052864
Gossip active    : true
Load             : 20.59 GB
Generation No    : 1339423322
Uptime (seconds) : 39626
Heap Memory (MB) : 3418.42 / 5984.00
Data Center      : datacenter1
Rack             : rack1
Exceptions       : 0



Row cache and key cache are both disabled by default:
                Key cache: disabled
                Row cache: disabled





# pmap 9567
9567: java
0000000040000000     36K     36K     36K      0K      0K r-xp /opt/jdk1.6.0_29/bin/java
0000000040108000      8K      8K      8K      8K      0K rwxp /opt/jdk1.6.0_29/bin/java
000000004010a000  18040K  17988K  17988K  17988K      0K rwxp [heap]
000000067ae00000 6326700K 6258664K 6258664K 6258664K      0K rwxp [anon]
00000007fd06b000  48724K      0K      0K      0K      0K rwxp [anon]
00007fbed1530000 1331104K      0K      0K      0K      0K r-xs /var/cassandra/data/drc/queue-hb-219-Data.db
00007fbf22918000 2097152K      0K      0K      0K      0K r-xs /var/cassandra/data/drc/queue-hb-219-Data.db
00007fbfa2918000 2097148K 1124464K 1124462K      0K      0K r-xs /var/cassandra/data/drc/queue-hb-219-Data.db
00007fc022917000 2097156K 2096496K 2096492K      0K      0K r-xs /var/cassandra/data/drc/queue-hb-219-Data.db
00007fc0a2918000 2097148K 2097148K 2097146K      0K      0K r-xs /var/cassandra/data/drc/queue-hb-219-Data.db
00007fc1a2917000 733584K   6444K   6444K      0K      0K r-xs /var/cassandra/data/drc/queue-hb-109-Data.db
00007fc1cf57b000 2097148K  20980K  20980K      0K      0K r-xs /var/cassandra/data/drc/queue-hb-109-Data.db
00007fc24f57a000 2097152K 456480K 456478K      0K      0K r-xs /var/cassandra/data/drc/queue-hb-109-Data.db
00007fc2cf57a000 2097156K 1168320K 1168318K      0K      0K r-xs /var/cassandra/data/drc/queue-hb-109-Data.db
00007fc34f57b000 2097148K 1177520K 1177520K      0K      0K r-xs /var/cassandra/data/drc/queue-hb-109-Data.db
00007fc405629000 618708K 338248K 338248K      0K      0K r-xs /var/cassandra/data/drc/queue-hb-230-Data.db
00007fc42b25e000 620388K 289024K 289024K      0K      0K r-xs /var/cassandra/data/drc/queue-hb-224-Data.db
00007fc451037000 619160K 342108K 342108K      0K      0K r-xs /var/cassandra/data/drc/queue-hb-216-Data.db
00007fc62b7df000 132696K      0K      0K      0K      0K r-xs /var/cassandra/data/drc/queue.idxInQueueTime-hb-175-Data.db
00007fc6de8e0000 132696K      0K      0K      0K      0K r-xs /var/cassandra/data/drc/queue.idxRecvTime-hb-175-Data.db
00007fc6f2bcc000  52492K      0K      0K      0K      0K r-xs /var/cassandra/data/drc/queue.idxPartitionId-hb-211-Data.db
00007fc6f64dc000  43784K  40840K  40840K      0K      0K r-xs /var/cassandra/data/drc/fpr_index-hb-91-Data.db
00007fc707ca6000  68968K  37724K  37724K      0K      0K r-xs /var/cassandra/data/drc/queue-hb-219-Index.db
00007fc70c000000   2468K   2436K   2436K   2436K      0K rwxp [anon]
00007fc70c269000  63068K      0K      0K      0K      0K ---p [anon]
00007fc710b9e000  52888K      0K      0K      0K      0K r-xs /var/cassandra/data/drc/queue.idxInQueueTime-hb-216-Data.db
00007fc713f44000  52952K      0K      0K      0K      0K r-xs /var/cassandra/data/drc/queue.idxFireTimeRange-hb-140-Data.db
00007fc7172fa000  52952K      0K      0K      0K      0K r-xs /var/cassandra/data/drc/queue.idxFireTime-hb-140-Data.db
00007fc71bd13000 162992K 162984K 162984K      0K      0K r-xs /var/cassandra/data/drc/fpr_index-hb-80-Data.db
00007fc725c3f000  52952K  28712K  28712K      0K      0K r-xs /var/cassandra/data/drc/queue.idxInQueueTimeRange-hb-140-Data.db
00007fc728ff5000  52952K      0K      0K      0K      0K r-xs /var/cassandra/data/drc/queue.idxRecvTimeRange-hb-140-Data.db
00007fc72d026000  52480K      0K      0K      0K      0K r-xs /var/cassandra/data/drc/queue.idxRecvTimeRange-hb-211-Data.db
00007fc730366000  52564K      0K      0K      0K      0K r-xs /var/cassandra/data/drc/queue.idxStatus-hb-196-Data.db
00007fc7336bb000  22348K      0K      0K      0K      0K r-xs /var/cassandra/data/drc/queue.idxInQueueTime-hb-175-Index.db
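The mmapped SSTable regions in the pmap listing (the r-xs *.db lines) can be totalled directly, which makes it easy to see how much of the 28.3g virtual size is just mapped data files. A sketch, run here against the current shell as a stand-in since the original PID is gone:

```shell
pid=$$   # stand-in; use the Cassandra PID (9567 in this thread)

# With `pmap -x`, field 2 is the mapping size in kB. Sum it for every
# SSTable mapping (the *.db lines in the listing above); for a shell
# process this prints 0, for the Cassandra process it is many GB.
pmap -x "$pid" | awk '/\.db/ {total += $2}
                      END {print total+0 " kB mmapped from SSTable files"}'
```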




//Ares Tang

It's always darkest just before you are eaten by a grue.