One or more of these might be effective, depending on your particular usage:

- remove data (rows especially)
- add nodes
- add RAM (has limitations)
- reduce bloom filter space by increasing the false-positive chance
- reduce row and key cache sizes
- increase the index sample ratio
- reduce compaction concurrency and throughput
- upgrade to Cassandra 1.2, which does some of these things for you
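For reference, here is a sketch of where a few of these knobs live in the 1.1-era tooling. The values and the `MyCF` column family name are illustrative placeholders, not recommendations:

```shell
# Throttle compaction throughput (MB/s) at runtime via nodetool:
nodetool -h 127.0.0.1 setcompactionthroughput 8

# In conf/cassandra.yaml (restart required):
#   concurrent_compactors: 2        # fewer parallel compactions
#   key_cache_size_in_mb: 50        # shrink the key cache
#   row_cache_size_in_mb: 0         # disable the row cache entirely
#   index_interval: 512             # sample fewer index entries (default 128)

# Raise the bloom filter false-positive chance per column family
# (cassandra-cli; "MyCF" is a placeholder name):
#   update column family MyCF with bloom_filter_fp_chance = 0.1;
```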


On Thu, May 30, 2013 at 2:31 PM, srmore <> wrote:
You are right, it looks like I am doing a lot of GC. Is there any short-term solution for this other than bumping up the heap? Even if I increase the heap I will run into the same issue; only the time before I hit OOM will be lengthened.
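(For anyone following along: in this era of Cassandra the heap is set in conf/cassandra-env.sh; the sizes below are examples only, not recommendations.)

```shell
# conf/cassandra-env.sh -- set both explicitly rather than relying on
# the auto-calculated defaults (example values, tune for your hardware):
MAX_HEAP_SIZE="8G"
HEAP_NEWSIZE="800M"
```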

It will be a while before we go to the latest and greatest Cassandra.

Thanks !

On Thu, May 30, 2013 at 12:05 AM, Jonathan Ellis <> wrote:
Sounds like you're spending all your time in GC, which you can verify
by checking what GCInspector and StatusLogger say in the log.

The fix is to increase your heap size or upgrade to 1.2:
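To verify this, grepping the system log for the lines Jonathan mentions is usually enough (the log path assumes a default packaged install):

```shell
# Long GC pauses show up as GCInspector lines; StatusLogger dumps follow them.
grep -E 'GCInspector|StatusLogger' /var/log/cassandra/system.log | tail -n 20
```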

On Wed, May 29, 2013 at 11:32 PM, srmore <> wrote:
> Hello,
> I am observing that my performance is drastically decreasing when my data
> size grows. I have a 3 node cluster with 64 GB of ram and my data size is
> around 400GB on all the nodes. I also see that when I re-start Cassandra the
> performance goes back to normal and then again starts decreasing after some
> time.
> Some hunting landed me on this page
> which talks
> about large data sets and explains that it might be because I am going
> through multiple layers of OS cache, but does not tell me how to tune it.
> So, my question is, are there any optimizations that I can do to handle
> these large datasets?
> and why does my performance go back to normal when I restart Cassandra ?
> Thanks !

Jonathan Ellis
Project Chair, Apache Cassandra