cassandra-user mailing list archives

From Rahul Reddy <>
Subject "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB"
Date Sun, 01 Dec 2019 15:39:30 GMT

We are seeing "Maximum memory usage reached (512.000MiB), cannot allocate chunk of 1.000MiB" messages. I believe this is because file_cache_size_mb defaults to 512 MB.

The DataStax documentation recommends increasing file_cache_size_mb.
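For context, this setting lives in cassandra.yaml (it is commented out by default, so the 512 MB default applies). A sketch of what raising it might look like; the 1024 value here is only an illustrative choice, not a tuned recommendation:

```yaml
# cassandra.yaml (excerpt)
# Maximum memory to use for chunk-cache / buffer pooling of SSTable reads.
# Defaults to 512 MiB when unset; the "Maximum memory usage reached" INFO
# message appears when the pool is exhausted. 1024 below is an assumption
# for illustration only.
file_cache_size_mb: 1024
```

A restart is required for the change to take effect.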

We have 32 GB of memory overall, with 16 GB allocated to Cassandra. What is the recommended value in my case? Also, what causes this memory to fill up so frequently, and does running nodetool flush help avoid these INFO messages?
