cassandra-user mailing list archives

From: Hossein Ghiyasi Mehr <>
Subject: Re: "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"
Date: Sun, 01 Dec 2019 16:50:25 GMT
Chunks are parts of SSTables. When there is enough space in memory to cache
them, read performance improves when the application requests the same data again.

The right value for you is application dependent. For example, write-heavy
applications behave differently from read-heavy or mixed read-write ones, and
real-time applications differ from time-series workloads, and so on.
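For reference, the limit in question is the off-heap chunk cache, sized by
file_cache_size_mb in cassandra.yaml. A minimal sketch (the 1024 below is
only an illustration, not a recommendation; the right number depends on your
workload and available off-heap memory):

    # cassandra.yaml
    # Off-heap limit for the SSTable chunk cache; defaults to 512 (MB).
    # 1024 is an illustrative value, not a tuning recommendation.
    file_cache_size_mb: 1024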

On Sun, Dec 1, 2019 at 7:09 PM Rahul Reddy <> wrote:

> Hello,
> We are seeing "memory usage reached 512MiB, cannot allocate 1MiB" messages.
> I see this because file_cache_size_mb is set to 512MB by default, and the
> DataStax documentation recommends increasing file_cache_size_mb.
> We have 32G of memory overall, with 16G of it allocated to Cassandra. What
> is the recommended value in my case? Also, what causes this memory to fill
> up so frequently, and does nodetool flush help to avoid these INFO messages?
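One way to check how full the chunk cache is and how well it is being used is
nodetool; a sketch, assuming a Cassandra 3.x node where "nodetool info"
reports a Chunk Cache line (the exact output format varies by version):

    $ nodetool info | grep "Chunk Cache"
    # illustrative output, not from a real node:
    # Chunk Cache   : entries 4096, size 512 MiB, capacity 512 MiB, ...

If size is pinned at capacity while the hit rate stays low, raising
file_cache_size_mb is more likely to help than flushing memtables, since the
message refers to this cache rather than to memtable memory.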
