incubator-cassandra-user mailing list archives

From Janne Jalkanen <janne.jalka...@ecyrd.com>
Subject Re: Why does cassandra PoolingSegmentedFile recycle the RandomAccessReader?
Date Mon, 15 Jul 2013 08:02:52 GMT

I had exactly the same problem, so I increased the sstable size from 5 MB to 50 MB; the default
5 MB is almost certainly too low for serious use cases. Now the number of SSTableReader objects
is manageable, and my heap is happier.
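
(For the record, the size is a per-table compaction option, so the change is a single schema
statement; the keyspace and table names here are just placeholders:

    ALTER TABLE myks.mytable
      WITH compaction = {'class': 'LeveledCompactionStrategy',
                         'sstable_size_in_mb': 50};

Existing SSTables keep their old size until they are next compacted.)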

Note that for immediate effect I stopped the node, removed the *.json manifest files, and
restarted. That put all SSTables back in L0, which meant a weekend full of compactions… It
would be really cool if there were a way to automatically drop all LCS SSTables one level down,
so they compact earlier without the "OMG-must-compact-everything-aargh-my-L0-is-full" effect
of removing the JSON file.

/Janne

On 15 Jul 2013, at 10:48, sulong <sulong1984@gmail.com> wrote:

> Why does Cassandra's PoolingSegmentedFile recycle the RandomAccessReader? The
> RandomAccessReader objects consume too much memory.
> 
> I have a cluster of 4 nodes. Every node's Cassandra JVM has an 8 GB heap. The heap fills up
> after about one month, so I have to restart all 4 nodes every month.
> 
> I have 100 GB of data on every node, with LeveledCompactionStrategy and a 10 MB sstable size,
> so there are more than 10,000 sstable files. Looking through the heap dump, I see more than
> 9,000 SSTableReader objects in memory, which reference lots of RandomAccessReader objects.
> The memory is consumed by these RandomAccessReader objects.
> 
> I see that PoolingSegmentedFile has a recycle method, which puts the RandomAccessReader into
> a queue. It looks like the queue always grows until the sstable is compacted. Is there any
> way to stop the RandomAccessReader recycling? Or to set a limit on the number of recycled
> RandomAccessReaders?
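
PS. on the "set a limit" question: what is described above is effectively an unbounded queue
of readers per sstable. Purely as an illustration of the capped alternative (names are invented
for the sketch; this is not the actual org.apache.cassandra code), a bounded pool could look
like:

    import java.io.Closeable;
    import java.io.IOException;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Illustrative sketch only: a reader pool that refuses to grow past a
    // fixed bound, closing surplus readers instead of queueing them forever.
    class BoundedReaderPool<R extends Closeable>
    {
        private static final int MAX_POOLED = 4; // invented bound for the sketch

        private final BlockingQueue<R> pool = new ArrayBlockingQueue<R>(MAX_POOLED);

        // Returns a cached reader, or null if the caller must open a fresh one.
        public R borrow()
        {
            return pool.poll();
        }

        // Recycles a reader; once the bound is reached, close it instead, so
        // the queue (and the heap) cannot grow without limit.
        public void recycle(R reader) throws IOException
        {
            if (!pool.offer(reader))
                reader.close();
        }
    }

With a cap like that, a compaction backlog costs you some reopened files instead of a heap full
of pooled readers.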

