cassandra-user mailing list archives

From Pardeep <>
Subject Cassandra 2.1.1 Out of Memory Errors
Date Sun, 16 Nov 2014 19:47:30 GMT
I'm running a 4-node cluster with RF=3, writing at CL QUORUM and reading at CL ONE. Each node has 3.7GB of RAM and a 32GB SSD for data; the commitlog is on a separate disk. Each node currently holds about 12GB of data. The cluster is always healthy except during repair, when OpsCenter shows some nodes dropping to medium health.
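For reference, heap size on 2.1 is controlled in conf/cassandra-env.sh; on a node this small the auto-calculation should land around a 1GB heap. A sketch of pinning it explicitly (the values below are illustrative for a ~4GB node, not tuned recommendations):

```shell
# conf/cassandra-env.sh (Cassandra 2.1)
# Override the auto-calculated heap; both variables must be set together.
# Illustrative values only -- not a recommendation.
MAX_HEAP_SIZE="1G"
HEAP_NEWSIZE="256M"
```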


I've looked everywhere for information on what might be causing these errors, but with no luck. Can anyone point me at what I should look at or tune to get around them?

All column families use SizeTieredCompactionStrategy. I've considered moving to LeveledCompactionStrategy since Cassandra is running on SSDs, but haven't made the move yet.
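If I do try LCS, my understanding is that the switch is just a per-table schema change, something like the following (keyspace and table names are placeholders; 160MB is, as far as I know, the 2.1 default SSTable target size):

```shell
# Placeholder keyspace/table; compaction happens gradually after the change.
cqlsh -e "ALTER TABLE my_keyspace.my_table
          WITH compaction = {'class': 'LeveledCompactionStrategy',
                             'sstable_size_in_mb': 160};"
```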

All writes are write-once; data is rarely updated and there are no TTL columns. I do use wide rows that can span from a few thousand to a few million columns, and I'm not sure whether range slices over them are held in memory.

Let me know if further info is needed. I do have .hprof heap dumps, but they are about 3.2GB each.
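For what it's worth, I could inspect the dumps offline with the JDK's jhat (or Eclipse MAT). Something like this, where the dump file name is a placeholder:

```shell
# Placeholder dump name; give jhat's own JVM enough heap to load a ~3.2GB dump.
jhat -J-Xmx6g java_pid12345.hprof
# jhat then serves a browsable object histogram on http://localhost:7000/
```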

java.lang.OutOfMemoryError: Java heap space
