cassandra-user mailing list archives

From Jonathan Ellis <>
Subject Re: Any way to put a hard limit on memory cap for Cassandra ?
Date Wed, 03 Oct 2012 15:50:13 GMT
There are three places that Cassandra will use non-heap memory:

One is JVM overhead like permgen.  This is a normal part of running
Java-based services and will be very stable and predictable.
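As a sketch (values illustrative, not recommendations), on the 1.0-era JVMs the permanent generation can be capped explicitly alongside the heap in conf/cassandra-env.sh, so that overhead has a hard ceiling too:

```shell
# cap the heap (new-gen sizing omitted for brevity)
JVM_OPTS="$JVM_OPTS -Xms750M -Xmx750M"
# cap permgen: the JVM throws OutOfMemoryError: PermGen space
# rather than growing past this limit
JVM_OPTS="$JVM_OPTS -XX:MaxPermSize=128M"
```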

Another is the off-heap row cache.  By default no row caching is done,
you have to explicitly enable it per-columnfamily.  You can also
control the maximum cache size in cassandra.yaml.
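A sketch of the relevant settings, assuming the 1.1-style cassandra.yaml where the row cache is sized globally and the off-heap (serializing) provider is selected (on 1.0 the per-columnfamily setting is `rows_cached`, set via the CLI):

```yaml
# cassandra.yaml: global cap on the row cache, in MB (0 disables it)
row_cache_size_in_mb: 200
# store cached rows off-heap rather than as objects on the JVM heap
row_cache_provider: SerializingCacheProvider
```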

Finally, Cassandra mmap's all its data files by default.  This is a
frequent source of misunderstanding, because mmaping doesn't mean the
memory is "used" in the normal sense, just that it's mapped into
Cassandra's address space so it can be read most efficiently.  See for more details.
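The mapped-versus-resident distinction can be demonstrated with a small sketch (Python here, with a temporary file standing in for an SSTable): mapping a file enlarges the process's virtual address space immediately, but physical pages are only faulted in as they are actually read.

```python
import mmap
import os
import tempfile

# create a 64 MB sparse file to stand in for a data file
size = 64 * 1024 * 1024
fd, path = tempfile.mkstemp()
os.ftruncate(fd, size)

with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # the whole file is now in our address space ("mapped"),
    # but no page is resident until we actually touch it
    first_byte = mm[0]  # this read faults in a single page
    mm.close()

os.close(fd)
os.remove(path)
print(first_byte)
```

Tools like top report the mapping in the process's virtual size, which is why mmap'd data files make Cassandra's apparent memory use look inflated.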

Note that only the JVM memory itself (heap + overhead) is locked by
JNA.  Disabling JNA will only expose you to a very bad experience
should the OS decide to swap out part of the JVM.  Best practice, of
course, is to disable swap entirely, but JNA is there as a fallback
because many people do not do this correctly.
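Under the hood the JNA locking is a single libc call, mlockall(2), which pins the process's pages so the OS cannot swap them out. A minimal ctypes sketch of that call (Linux flag values; it typically fails with EPERM unless the process has CAP_IPC_LOCK or a raised memlock rlimit, which is why Cassandra treats a failed lock as a warning rather than a fatal error):

```python
import ctypes
import os

# Linux values for the mlockall(2) flags
MCL_CURRENT = 1   # lock pages currently mapped
MCL_FUTURE = 2    # lock pages mapped in the future

# on Unix, CDLL(None) exposes symbols already linked into the process (libc)
libc = ctypes.CDLL(None, use_errno=True)
result = libc.mlockall(MCL_CURRENT | MCL_FUTURE)
if result != 0:
    # commonly EPERM: raise the memlock rlimit or grant CAP_IPC_LOCK
    print("mlockall failed:", os.strerror(ctypes.get_errno()))
else:
    print("all current and future pages are locked in RAM")
```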

Directing followups to the Cassandra user mailing list.

On Wed, Oct 3, 2012 at 3:33 AM, Thomas Yu <> wrote:
> Hi Jonathan,
>
> I've tried to find information on how to put a hard limit on the real memory
> usage of the Cassandra process, and would appreciate any pointers from you on
> this front.
>
> I'm using Cassandra 1.0.11, and have been using the ms and mx JVM options to
> try to limit the heap to 750M of memory. However, I find that the actual
> usage of the Cassandra process is around 1G, and I understand that is related
> to JNA and locked memory (likely rooted in the PermGen) in mmap.
>
> What I really want to understand is whether there is any way to put a hard
> limit on the real memory usage of Cassandra. Do I have to disable JNA in
> order to achieve that? Or, alternatively, can I fairly assume that the
> PermGen will be stable enough that it won't exceed by much the 250M that I
> observed in the behavior of my application? What about later releases of
> Cassandra (e.g. 1.1 or 1.2)? Is there any option to help on this front?
>
> Thanks in advance for any pointers that you can provide to help me
> understand this issue.
>
> Best Regards,
> -Thomas

Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of DataStax, the source for professional Cassandra support
