incubator-cassandra-user mailing list archives

From aaron morton <>
Subject Re: Cassandra out of Heap memory
Date Mon, 18 Jun 2012 00:44:11 GMT
Not commenting on the GC advice, but Cassandra memory usage has improved a lot since that was
written. I would take a look at what was happening and see if tweaking the Cassandra config helps
before modifying GC settings.
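
For reference, the GC settings in question are the CMS defaults shipped in cassandra-env.sh. From memory
(for the 1.0.x era scripts, so double check against your own file) they look roughly like:

    JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC"
    JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
    JVM_OPTS="$JVM_OPTS -XX:+CMSParallelRemarkEnabled"
    JVM_OPTS="$JVM_OPTS -XX:SurvivorRatio=8"
    JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=1"
    JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=75"
    JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"

Lowering CMSInitiatingOccupancyFraction is the "slightly lower CMS threshold" tweak the article you quote
below is talking about; I would only experiment with it after ruling out a config or workload cause.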

> " 88): Heap is .9934 full." Is this expected? or
> should I adjust my flush_largest_memtable_at variable.
flush_largest_memtable_at is a safety valve only. Reducing it may help avoid an OOM, but it will
not treat the cause.
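
For completeness, that knob lives in cassandra.yaml; if I remember the 1.0.x defaults right it ships as
something like this (name and value from memory, so check your own yaml):

    # emergency pressure valve: flush the largest memtables when the heap is this full after a CMS GC
    flush_largest_memtables_at: 0.75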

What version are you using?

1.0.0 had an issue where deletes were not taken into consideration (
but this does not sound like the same problem.

Take a look in the logs on the machine and see if the high heap usage was associated with a compaction or repair.

I would also consider experimenting on one node with 8GB / 800MB heap sizes. More is not always better.
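
Reading that as max heap / young generation: those are normally set in cassandra-env.sh on the node you
experiment with, e.g. something along the lines of

    MAX_HEAP_SIZE="8G"
    HEAP_NEWSIZE="800M"

(variable names from memory, so double check the script before relying on them).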

Aaron Morton
Freelance Developer

On 14/06/2012, at 8:05 PM, rohit bhatia wrote:

> Looking at
> and server logs, I think my situation is this
> "The default cassandra settings has the highest peak heap usage. The
> problem with this is that it raises the possibility that during the
> CMS cycle, a collection of the young generation runs out of memory to
> migrate objects to the old generation (a so-called concurrent mode
> failure), leading to stop-the-world full garbage collection. However,
> with a slightly lower setting of the CMS threshold, we get a bit more
> headroom, and more stable overall performance."
> I see ConcurrentMarkSweep entries in system.log trying to GC 2-4 collections.
> Any suggestions for preemptive measures for this would be welcome.
