cassandra-user mailing list archives

From openvictor Open <openvic...@gmail.com>
Subject Re: Abnormal memory consumption
Date Mon, 04 Apr 2011 14:11:19 GMT
Hey Aaron,

Thank you for your kind answer.
This is a test server; the production server (a single instance at the moment)
has 8 GB of RAM (or 12 GB, not decided yet). But other things run alongside it:

Solr, Redis, PostgreSQL and Tomcat. Together they take up to 1 GB of RAM when
running and loaded. This is a personal open-source project and I am a student,
so I don't have a lot of money, but to be clear: Cassandra is used as a "safe"
where I keep all the information. That information is then distributed to
Redis, PostgreSQL and Solr so it can be used and served to the users of the
website. My concern is: will Cassandra be able to live in 7 GB of RAM? Or
should I go for 12 GB (leaving it roughly 11 GB)?

My last concern, and for me it is a flaw in Cassandra and I am sad to admit
it because I love Cassandra: how come that for 6 MB of data, Cassandra feels
the need to use 500 MB of RAM? I can understand needing, let's say, 100 MB
for caches and for several memtables being alive at the same time. But 500 MB
of RAM is about 80 times the total amount of data I have. Redis, which you
mentioned, uses 50 MB.
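
Doing some rough back-of-the-envelope maths with my own settings (quoted
further down, so these numbers are only illustrative):

    15 column families x 16 MB memtable threshold = ~240 MB of potential memtable space
    ~240 MB x 2 (the low end of your fudge factor) = ~480 MB

So I can see how the resident size ends up tracking the heap and the
thresholds rather than the few MB actually stored, even if it still feels
excessive for 6 MB of real data.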


Victor.

2011/4/4 aaron morton <aaron@thelastpickle.com>

> For background see the JVM Heap Size section here
> http://wiki.apache.org/cassandra/MemtableThresholds
>
> You can also add a fudge factor of anywhere from x2 to x8 to the size of
> the memtables. You are in for a very difficult time trying to run Cassandra
> with under 500 MB of heap space.
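>
> As a rough illustration (only a sketch, the exact file and variable names
> depend on your install): on a stock 0.6 tarball the heap is set through
> JVM_OPTS in bin/cassandra.in.sh, so capping it would look something like
>
>     # bin/cassandra.in.sh (or your package's equivalent)
>     JVM_OPTS="$JVM_OPTS -Xms128M -Xmx256M"
>
> Whatever -Xmx you pick, the memtable thresholds and caches across all your
> column families have to fit inside it.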
>
> Is this just a test, or are you hoping to run it in production like this?
> If you need a small, single-instance, schema-free data store, would Redis
> suit your needs?
>
> Hope that helps.
> Aaron
>
> On 2 Apr 2011, at 01:34, openvictor Open wrote:
>
> > Hello everybody,
> >
> > I am quite new to Cassandra and I am worried about an Apache Cassandra
> > server that is running on a small isolated server with only 2 GB of RAM.
> > On this server there is very little data in Cassandra (~3 MB, only text
> > in column values), but there are other services running such as Solr,
> > Tomcat, Redis and PostgreSQL. There are quite a lot of column families
> > (about 15), but some of them are empty at the moment. Memory consumption
> > is currently 484 MB resident and 948556 virtual.
> >
> > I modified the storage-conf (I am running Apache Cassandra 0.6.11): I set
> > DiskAccessMode to standard since I am running on 64-bit Debian. I also
> > set the MemtableThroughput to 16 MB instead of 64 MB, and I lowered Xms
> > and Xmx to 128M and 256M.
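> >
> > For reference, the relevant fragment of my storage-conf.xml now looks
> > roughly like this (element names quoted from memory, so check them
> > against the file that ships with 0.6.11):
> >
> >     <DiskAccessMode>standard</DiskAccessMode>
> >     <MemtableThroughputInMB>16</MemtableThroughputInMB>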
> >
> > My question is: where does this giant memory overhead come from (484 MB
> > for 3 MB of data seems insane)? And more importantly: how can I cap
> > Cassandra at, let's say, 500 MB, because at this rate Cassandra will be
> > well over that limit soon.
> > For information: because of security constraints I cannot use JMX, except
> > if there is a way to use JMX through SSH without a graphical interface.
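> >
> > What I had in mind, if it is workable, is something along these lines:
> > SSH into the machine and run nodetool locally, since nodetool is itself
> > a command-line JMX client and needs no GUI (I believe 8080 is the default
> > JMX port in 0.6):
> >
> >     ssh user@cassandra-host
> >     bin/nodetool -h localhost -p 8080 info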
> >
> > Thank you for your help.
> > Victor
>
>
