cassandra-user mailing list archives

From "Mokkapati, Bhargav (Nokia - IN/Chennai)" <bhargav.mokkap...@nokia.com>
Subject Maximum memory usage reached in cassandra!
Date Tue, 28 Mar 2017 07:01:55 GMT
Hi Cassandra users,

I am getting "Maximum memory usage reached (536870912 bytes), cannot allocate chunk of 1048576
bytes". As a remedy, I raised the off-heap memory cap, i.e. the file_cache_size_in_mb
parameter in cassandra.yaml, from 512 to 1024.

But the increased limit has now filled up as well, and Cassandra is logging "Maximum memory usage
reached (1073741824 bytes), cannot allocate chunk of 1048576 bytes".
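For what it's worth, the byte counts in the two messages correspond exactly to the old and new file_cache_size_in_mb caps, so the limit being hit is the configured one:

```python
# The "Maximum memory usage reached" byte counts are the configured
# file_cache_size_in_mb caps expressed in bytes.
MB = 1024 * 1024

old_cap_mb = 536870912 // MB    # first message: 512 MB (the default)
new_cap_mb = 1073741824 // MB   # second message: 1024 MB (after the change)
chunk_mb = 1048576 / MB         # each failed allocation is a 1 MB chunk

print(old_cap_mb, new_cap_mb, chunk_mb)  # → 512 1024 1.0
```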

This issue occurs while index redistribution is happening; the Cassandra nodes stay UP, but read
requests from the application side fail.

My configuration details are as below:

5-node cluster; each node has 68 disks, each disk 3.7 TB

Total CPU cores: 8

Memory (per free -h):
total 377G, used 265G, free 58G, shared 378M, buff/cache 53G, available 104G

MAX_HEAP_SIZE is 4GB
file_cache_size_in_mb: 1024

The memtable space settings are commented out in the yaml file, as below:
# memtable_heap_space_in_mb: 2048
# memtable_offheap_space_in_mb: 2048
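For reference, the relevant cassandra.yaml excerpts on my nodes (the memtable limits are still at their commented-out defaults):

```yaml
# cassandra.yaml (relevant excerpts)
file_cache_size_in_mb: 1024         # raised from the 512 default
# memtable_heap_space_in_mb: 2048   # commented out (default in effect)
# memtable_offheap_space_in_mb: 2048
```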

Can anyone please suggest a solution for this issue? Thanks in advance!

Thanks,
Bhargav M




