On Fri, Jun 21, 2013 at 2:53 AM, aaron morton <firstname.lastname@example.org> wrote:
>> nodetool -h localhost flush didn't do much good.
>
> Do you have 100's of millions of rows? If so, see recent discussions
> about reducing the bloom_filter_fp_chance and index_sampling.

Yes, I have 100's of millions of rows.

> If this is an old schema you may be using the very old setting of
> 0.000744, which creates a lot of bloom filters.

The bloom_filter_fp_chance value was changed from the default to 0.1. I
looked at the filters: they are about 2.5G on disk, and I have around 8G
of heap. I will try increasing the value to 0.7 and report my results.

It also appears to be a case of hard GC failure (as Rob mentioned): the
heap is never released, and even after 24+ hours of idle time the JVM
needs to be restarted to reclaim the heap.
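The standard bloom filter sizing formula gives a rough sense of how much raising fp_chance shrinks the filters (Cassandra's actual allocator rounds bucket sizes, so on-disk numbers will differ):

```shell
# Bits of bloom filter per key: m/n = -ln(p) / (ln 2)^2
# At p=0.1 this is about 4.79 bits/key; at p=0.7 about 0.74 -- roughly
# a 6.5x reduction in filter size for the same number of keys.
for p in 0.000744 0.01 0.1 0.7; do
    awk -v p="$p" 'BEGIN { printf "fp_chance=%-8s -> %5.2f bits/key\n", p, -log(p) / (log(2) ^ 2) }'
done
```

Note the trade-off: fp_chance is the false-positive probability, so at 0.7 roughly 70% of lookups for absent keys will go to disk anyway. Also, existing SSTables keep their old filters until rewritten, so the change would likely need an SSTable rebuild (e.g. nodetool upgradesstables or scrub -- check the docs for your version) before the heap savings appear.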
On 20/06/2013, at 6:36 AM, Wei Zhu <email@example.com> wrote:

If you want, you can try to force the GC through JConsole: Memory -> Perform GC.
It theoretically triggers a full GC; when it actually happens depends on the JVM.
-Wei

From: "Robert Coli" <firstname.lastname@example.org>
Sent: Tuesday, June 18, 2013 10:43:13 AM
Subject: Re: Heap is not released and streaming hangs at 0%
On Tue, Jun 18, 2013 at 10:33 AM, srmore <email@example.com> wrote:
> But then shouldn't the JVM GC it eventually? I can still see Cassandra alive
> and kicking but looks like the heap is locked up even after the traffic is
> long stopped.
No, when the GC system fails this hard it is often a permanent failure
which requires a restart of the JVM.
> nodetool -h localhost flush didn't do much good.
This adds support to the idea that your heap is too full, and not full
of memtables (which is what flushing would have freed).
You could try nodetool -h localhost invalidatekeycache, but that
probably will not free enough memory to help you.
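As a scriptable alternative to clicking around in JConsole, and to see what is actually retaining the heap, something like the following could be run on the affected node. This is a sketch: the PID is illustrative, jcmd ships with JDK 7u4+, and the exact class names at the top of the histogram vary by Cassandra version.

```shell
# Hypothetical PID; could be found with e.g.: pgrep -f CassandraDaemon
PID=12345

# Ask the JVM for a full GC -- the command-line equivalent of
# JConsole's Memory -> Perform GC button (requires JDK 7u4+)
jcmd "$PID" GC.run

# Print a histogram of objects that survive a full GC; -histo:live
# itself forces the GC first. If bloom filters are what is pinning
# the heap, large long[] counts and Cassandra filter classes should
# dominate the top of the list.
jmap -histo:live "$PID" | head -25
```

If the histogram barely changes after the forced GC, that points to genuinely live objects (such as the bloom filters discussed above) rather than garbage the collector is failing to reclaim.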