incubator-cassandra-user mailing list archives

From "Jeremiah Jordan" <>
Subject RE: Abnormal memory consumption
Date Thu, 07 Apr 2011 21:33:53 GMT
Connect with jconsole and watch the memory consumption graph.  Click the
force-GC button and watch what the low point is; that is how much memory
is being used for persistent data, and the rest is garbage generated
while satisfying queries.  Run a query and watch how the graph spikes;
that spike is how much memory the query needs.  As others have said,
Cassandra isn't using 600 MB of RAM, the Java Virtual Machine is using
600 MB of RAM, because your settings told it it could.  The JVM will use
as much memory as your settings allow.  If you really are putting that
little data into your test server, you should be able to tune everything
down to only 256 MB easily (I do this for test instances of Cassandra
that I spin up to run some tests on), maybe further.
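For the record, a minimal sketch of what that tuning looks like on
Cassandra 0.7 and later, where conf/cassandra-env.sh controls the heap.
The 256M figure is just the test-instance size mentioned above, so treat
the values as illustrative:

```shell
# conf/cassandra-env.sh -- override the auto-calculated heap for a small
# test instance.  Both values should be set together.
MAX_HEAP_SIZE="256M"   # total JVM heap (becomes -Xms/-Xmx)
HEAP_NEWSIZE="64M"     # young generation (becomes -Xmn)
```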


From: openvictor Open [] 
Sent: Wednesday, April 06, 2011 7:59 PM
Subject: Re: Abnormal memory consumption

Hello Paul,

Thank you for the tip. The random port attribution policy of JMX was
really driving me mad! Good to know there is a solution for that.

Concerning the rest of the conversation, my only concern is that as an
administrator and a student it is hard to constantly watch Cassandra
instances so that they don't crash. As much as I love the principles of
Cassandra, being constantly afraid of memory consumption is an issue in
my opinion. That being said, I got a new 16 GB server today, but I
don't want Cassandra to eat up everything if it is not needed, because
Cassandra will have some neighbors such as Tomcat and Solr on this
server. And for me it is very strange that on my small instance, where
I set tight constraints such as memtable_throughput_in_mb to 6,
Cassandra uses 600 MB of RAM for 6 MB of data. That seems a bit of
overkill to me... and so far I have failed to find any information on
what this massive overhead could be...

Thank you for your answers and for taking the time to reply to my
questions.

2011/4/6 Paul Choi <>

	You can use JMX over SSH: use ssh -D to set up dynamic
application port forwarding (a SOCKS proxy) and point jconsole through
the tunnel.
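As a concrete sketch of that tunnel (the user, host name, and port
numbers below are placeholders; the JMX port is whatever your Cassandra
configuration exposes, commonly 7199):

```shell
# Open a SOCKS proxy on localhost:1080 that forwards dynamically
# through the Cassandra host (ssh -D = dynamic port forwarding).
ssh -N -D 1080 user@cassandra-host

# In another terminal, point jconsole through the proxy.
# socksProxyHost/socksProxyPort are standard JVM networking properties.
jconsole -J-DsocksProxyHost=localhost -J-DsocksProxyPort=1080 \
  service:jmx:rmi:///jndi/rmi://cassandra-host:7199/jmxrmi
```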

	In terms of scaling, you'll be able to afford 120GB RAM/node in
3 years if you're successful. Or, a machine with much less RAM and
flash-based storage. :)
	Seriously, though, the formula in the tuning guidelines is a
guideline. You can probably get acceptable performance with much less.
If not, you can shard your app such that you host a few CFs per cluster.
I doubt you'll need to though.
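For a sense of scale, the back-of-the-envelope arithmetic behind that
guideline looks roughly like this. The formula (per-CF memtable
threshold, times a roughly 3x overhead factor, times the number of hot
CFs, plus about 1 GB for internals) is a paraphrase of the old wiki
tuning page, and the input numbers are made up for illustration:

```shell
# Assumed rule of thumb: memtable_throughput_in_mb * 3 * hot_cfs + ~1 GB.
# Illustrative numbers only.
memtable_mb=64   # per-CF memtable threshold
hot_cfs=20       # column families under active write load
echo "$(( memtable_mb * 3 * hot_cfs + 1024 )) MB"
# prints: 4864 MB
```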

	From: openvictor Open <>
	Reply-To: <>
	Date: Mon, 4 Apr 2011 18:24:25 -0400
	To: <>
	Subject: Re: Abnormal memory consumption

	Okay, I see. But isn't there a big issue for scaling here?
	Imagine that I am the developer of a certain very successful
website: at year 1 I need 20 CFs and might need 8 GB of RAM. At year 2
I need 50 CFs because I added functionality to my wonderful website;
will I need 20 GB of RAM? And if at year three I have 300 column
families, will I need 120 GB of RAM per node? Or did I miss something
about memory consumption?
	Thank you very much,
	2011/4/4 Peter Schuller <>

		> And about production: is 7 GB of RAM sufficient? Or
		> is 11 GB the minimum?
		> Thank you for your inputs on the JVM; I'll try to
		> tune it.

		Production memory requirements are mostly dependent on
memtable thresholds. If you enable key caching or row caching, you will
have to adjust accordingly as well.

		/ Peter Schuller
