cassandra-user mailing list archives

From Aaron Morton <aa...@thelastpickle.com>
Subject Re: Newbie question on Cassandra mem usage
Date Mon, 22 Nov 2010 19:00:18 GMT
The higher memory usage for the java process may be due to memory-mapped file access; take
a look at the disk_access_mode setting in cassandra.yaml.
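For reference, a sketch of what that setting looks like. This is a hypothetical excerpt from cassandra.yaml (option names and defaults vary by version); the key point is that mmap'd files count against the process's resident size without living on the Java heap:

```yaml
# disk_access_mode controls how SSTable files are read:
#   auto            - mmap data and index files on 64-bit JVMs (typical default)
#   mmap            - always memory-map data and index files
#   mmap_index_only - memory-map index files only
#   standard        - plain buffered I/O; lowest apparent process memory
disk_access_mode: standard
```

With mmap modes, tools like top will report the mapped files as part of the java process's memory even though they are not heap, which can make memory usage look far higher than -Xmx.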

WRT going OutOfMemory:
- What are your Memtable thresholds in cassandra.yaml?
- How many Column Families do you have?
- What are your row and key cache settings?
- Have a read of JVM HeapSize section here http://wiki.apache.org/cassandra/MemtableThresholds
- Have a read of http://wiki.apache.org/cassandra/FAQ#slows_down_after_lotso_inserts
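To make the questions above concrete, here is a hedged sketch of the per-column-family settings they refer to, assuming a 0.7-style schema section in cassandra.yaml (the keyspace and column family names are hypothetical, and exact option names differ between versions):

```yaml
keyspaces:
    - name: MyKeyspace        # hypothetical keyspace name
      replica_placement_strategy: org.apache.cassandra.locator.SimpleStrategy
      replication_factor: 3
      column_families:
        - name: MyCF          # hypothetical column family name
          # Smaller memtables flush sooner and hold less data on the heap:
          memtable_throughput_in_mb: 64
          memtable_operations_in_millions: 0.3
          # Caches live on the heap too; keep them small while testing:
          keys_cached: 10000
          rows_cached: 0
```

Remember that each Column Family gets its own memtable, so the heap cost of these thresholds multiplies with the number of CFs.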

In short, if you've turned up any memory settings, turn them down. Run your test again and
see if it completes. Then turn them up a little at a time.
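If you do want to adjust the heap directly, the JVM flags are usually set in conf/cassandra-env.sh (0.7-style packaging). A hedged sketch, assuming a 14GB box like the one described below; the exact values are illustrative, and the point is to leave a good share of RAM for the OS page cache and any mmap'd files:

```sh
# Hypothetical excerpt from conf/cassandra-env.sh.
# Cap the heap well below physical RAM rather than at half of it:
MAX_HEAP_SIZE="4G"
# Young generation; commonly sized at roughly 100MB per CPU core:
HEAP_NEWSIZE="400M"
```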

If you're still having trouble, include some details of your cassandra.yaml file and the schema
definition next time, as well as how many Cassandra nodes you have, how many clients you are
running against them, and how fast they are writing.

Aaron


On 23 Nov, 2010, at 07:45 AM, Trung Tran <tran.hieutrung@gmail.com> wrote:

Hi,

I have a test cluster of 3 nodes, 14GB of memory in each node,
replication factor = 3. With the default -Xms and -Xmx, my nodes are set to
have max-heap-size = 7GB. After an initial load of about 200M rows
(written with Hector's default consistency level = QUORUM), my nodes' memory
usage is up to 13.5GB; they show a bunch of GC notifications and
eventually crash with java.lang.OutOfMemoryError: Java heap space.

Is there any setting that can help with this scenario?

Thanks,
Trung.
