cassandra-user mailing list archives

From Rahul Reddy <>
Subject Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"
Date Mon, 02 Dec 2019 14:48:34 GMT
Thanks Hossein,

How are chunks moved out of memory (LRU?) when the cache needs to make room
for new requests? If there is a mechanism to evict chunks from the cache,
what causes the "cannot allocate chunk" message? Can you point me to any
documentation?
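
My current mental model, in case it helps frame the question -- a rough
sketch only, with made-up names rather than Cassandra's actual BufferPool
internals:

    import java.nio.ByteBuffer;
    import java.util.concurrent.atomic.AtomicLong;

    // Rough sketch of a size-capped buffer pool with a fallback path.
    // Names are illustrative; this is not Cassandra's actual BufferPool.
    class CappedBufferPool {
        private final long maxBytes;               // e.g. 512 MiB budget
        private final AtomicLong usedBytes = new AtomicLong();

        CappedBufferPool(long maxBytes) { this.maxBytes = maxBytes; }

        ByteBuffer allocate(int size) {
            while (true) {
                long used = usedBytes.get();
                if (used + size > maxBytes) {
                    // Budget exhausted: every pooled chunk is still in use,
                    // so nothing can be recycled. Log (INFO, not an error)
                    // and allocate outside the pool; the read still succeeds.
                    System.out.printf(
                        "Maximum memory usage reached (%d), cannot allocate chunk of %d%n",
                        maxBytes, size);
                    return ByteBuffer.allocateDirect(size); // unpooled, freed by GC
                }
                if (usedBytes.compareAndSet(used, used + size))
                    return ByteBuffer.allocateDirect(size); // pooled; recycled via release()
            }
        }

        void release(ByteBuffer buf) {
            usedBytes.addAndGet(-buf.capacity());  // return capacity to the budget
        }
    }

If that picture is right, the message would mean the whole budget is tied up
in chunks that are still referenced, not that a read failed -- but I'd like
to confirm that.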

On Sun, Dec 1, 2019, 12:03 PM Hossein Ghiyasi Mehr <> wrote:

> Chunks are parts of sstables. When there is enough space in memory to cache
> them, read performance will increase if the application requests them again.
> The real answer is application dependent. For example, write-heavy
> applications are different from read-heavy or read-write-heavy ones. Real-time
> applications are different from time-series data environments, and so on.
> On Sun, Dec 1, 2019 at 7:09 PM Rahul Reddy <>
> wrote:
>> Hello,
>> We are seeing the "memory usage reached 512 MiB, cannot allocate 1 MiB"
>> message. I see this because file_cache_size_mb is set to 512 MiB by default.
>> The DataStax documentation recommends increasing the file_cache_size.
>> We have 32 GB of memory overall, with 16 GB allocated to Cassandra. What is
>> the recommended value in my case? Also, when does this memory get filled up
>> so frequently, and does nodetool flush help in avoiding these info messages?
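
For reference, the setting under discussion lives in cassandra.yaml, where
the key is spelled file_cache_size_in_mb in 3.x. An excerpt showing the
default mentioned above (for orientation, not as a tuning recommendation):

    # cassandra.yaml (excerpt)
    # Total memory to use for sstable-reading buffers -- the off-heap chunk
    # cache that logs the INFO message above. Defaults to the smaller of
    # 1/4 of the heap and 512 MiB.
    file_cache_size_in_mb: 512

With a 16 GB heap, 1/4 of the heap would be 4096 MiB, so the 512 MiB ceiling
is the bound that applies here.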
