cassandra-user mailing list archives

From aaron morton <>
Subject Re: Abnormal memory consumption
Date Mon, 04 Apr 2011 11:46:14 GMT
For background, see the JVM Heap Size section here.

You can also add a fudge factor of anywhere from 2x to 8x on top of the size of the memtables. You are in for a very difficult time trying to run Cassandra with under 500 MB of heap space.
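To see why, here is a back-of-the-envelope sketch using the numbers from the message below (15 column families, memtable throughput lowered to 16 MB); the 2x-8x fudge factor is the one mentioned above, and this is only an illustration, not a sizing formula:

```python
# Rough worst-case heap estimate: every column family holds a full
# memtable in memory at once, multiplied by a per-memtable fudge factor.
column_families = 15   # from the message below
memtable_mb = 16       # lowered MemtableThroughput, from the message below
for fudge in (2, 8):
    heap_mb = column_families * memtable_mb * fudge
    print(f"fudge x{fudge}: ~{heap_mb} MB of heap just for memtables")
```

Even at the low end (~480 MB) this already exhausts a 500 MB heap before accounting for anything else the JVM needs.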

Is this just a test, or are you hoping to run it in production like this? If you need a small, single-instance, schema-free data store, would Redis suit your needs?

Hope that helps.

On 2 Apr 2011, at 01:34, openvictor Open wrote:

> Hello everybody,
> I am quite new to Cassandra and I am worried about an Apache Cassandra server that is running on a small isolated server with only 2 GB of RAM. There is very little data in Cassandra (~3 MB, only text in column values), but the server also hosts other services: Solr, Tomcat, Redis, PostgreSQL. There are quite a lot of column families (about 15), though some of them are empty at the moment. Memory consumption is currently 484 MB resident and 948556 virtual.
> I modified the storage-conf (I am running Apache Cassandra 0.6.11): I set DiskAccessMode to standard since I am running on 64-bit Debian, I lowered the MemtableThroughput from 64 MB to 16 MB, and I lowered Xms to 128M and Xmx to 256M.
> My question is: where does this giant memory overhead come from (484 MB for 3 MB of data seems insane)? And more importantly: how can I cap Cassandra at, say, 500 MB? At this rate Cassandra will be well over that limit soon.
> For information: for security reasons I cannot use JMX, unless there is a way to use JMX through SSH without a graphical interface.
> Thank you for your help.
> Victor
