cassandra-user mailing list archives

From Hossein Ghiyasi Mehr <>
Subject Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"
Date Mon, 02 Dec 2019 15:21:41 GMT
It may be helpful:
It's complex. A simple explanation: Cassandra keeps sstable chunks in memory
based on chunk size and sstable parts. It manages loading new sstables into
memory correctly, based on the requests hitting different sstables. You
shouldn't need to worry about which sstables are loaded in memory.
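One way to see how full the chunk cache actually is on a node is `nodetool info` (a sketch, assuming Cassandra 3.11+, where `nodetool info` includes chunk-cache statistics; it requires a running node, and the exact label can vary by version):

```shell
# Show chunk cache occupancy, capacity, and hit rate on a live node.
# Requires a running Cassandra node reachable via JMX on this host.
nodetool info | grep -i "chunk cache"
```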


On Mon, Dec 2, 2019 at 6:18 PM Rahul Reddy <> wrote:

> Thanks Hossein,
> How are chunks moved out of memory (LRU?) when it needs to make room
> for new requests? If there is a mechanism to clear chunks from the
> cache, what causes "cannot allocate chunk"? Can you point me to any
> documentation?
> On Sun, Dec 1, 2019, 12:03 PM Hossein Ghiyasi Mehr <>
> wrote:
>> Chunks are parts of sstables. When there is enough space in memory to
>> cache them, read performance increases if the application requests them
>> again.
>> The real answer is application dependent. For example, write-heavy
>> applications behave differently from read-heavy or mixed read/write
>> ones, and real-time applications differ from time-series workloads.
>> On Sun, Dec 1, 2019 at 7:09 PM Rahul Reddy <>
>> wrote:
>>> Hello,
>>> We are seeing "Maximum memory usage reached (512.000MiB), cannot
>>> allocate chunk of 1.000MiB". I see this because file_cache_size_mb
>>> defaults to 512 MB. The DataStax documentation recommends increasing
>>> file_cache_size_mb. We have 32 GB of memory overall, with 16 GB
>>> allocated to Cassandra. What is the recommended value in my case? Also,
>>> how frequently does this memory fill up, and does nodetool flush help
>>> avoid these INFO messages?
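For reference, the knobs discussed above live in cassandra.yaml. A minimal sketch, assuming Cassandra 3.x (values are illustrative, not recommendations for any particular cluster):

```yaml
# cassandra.yaml -- illustrative values only, assuming Cassandra 3.x.
# Size of the off-heap buffer pool (chunk cache) backing sstable reads.
# The "Maximum memory usage reached (512.000MiB)" INFO line is logged
# when an allocation would push the pool past this limit.
file_cache_size_mb: 1024

# When the buffer pool is exhausted, fall back to on-heap allocation
# instead of failing; the read still succeeds, and the INFO message
# is informational rather than an error.
buffer_pool_use_heap_if_exhausted: true
```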
