cassandra-user mailing list archives

From Trung Tran <>
Subject Re: Newbie question on Cassandra mem usage
Date Mon, 22 Nov 2010 19:14:52 GMT

Thanks for the guidelines. I did not turn up any memory settings; the
nodes are configured with all defaults (except for disk access, which
is using mmap). I have 3 nodes and 1 client using Hector with 8 writer
threads. There are 3 column families: 1 standard and 2 super.
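The disk-access setting mentioned above lives in cassandra.yaml; a minimal sketch of the relevant fragment, assuming a 0.7-era config where the option is named disk_access_mode (the name may differ in other versions):

```yaml
# Sketch of the relevant cassandra.yaml fragment (0.7-era naming assumed).
# mmap maps SSTable data and index files into the process address space,
# which inflates the java process's reported memory without using heap.
disk_access_mode: mmap
```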


On Mon, Nov 22, 2010 at 11:00 AM, Aaron Morton <> wrote:
> The higher memory usage for the java process may be because of memory-mapped
> file access; take a look at the disk_access_mode setting in cassandra.yaml.
> WRT going OutOfMemory:
> - what are your Memtable thresholds in cassandra.yaml ?
> - how many Column Families do you have?
> - What are your row and key cache settings?
> - Have a read of the JVM HeapSize section here
> - Have a read of
> In short, if you've turned up any memory settings, turn them down. Run your
> test again and see if it completes. Then turn them up a little at a time.
> If you're still having trouble, include some details of your cassandra.yaml
> file and the schema definition next time, as well as how many Cassandra
> nodes you have, how many clients you are running against them, and how fast
> they are writing.
> Aaron
> On 23 Nov, 2010, at 07:45 AM, Trung Tran <> wrote:
> Hi,
> I have a test cluster of 3 nodes with 14 GB of memory in each node and
> replication factor = 3. With the default -Xms and -Xmx, my nodes are set to
> a max heap size of 7 GB. After an initial load of about 200M rows
> (written with Hector's default ConsistencyLevel = QUORUM), my nodes' memory
> usage climbs to 13.5 GB; they show a bunch of GC notifications and
> eventually crash with java.lang.OutOfMemoryError: Java heap space.
> Is there any setting that can help with this scenario?
> Thanks,
> Trung.
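The heap-sizing tension in the thread (a 7 GB heap on a 14 GB node, plus mmap'd SSTables competing for the same physical memory) can be sketched numerically. This is an illustrative calculation, not advice from the thread; the half-of-RAM split and the cassandra-env.sh variable name are assumptions:

```shell
# Hypothetical heap sizing for a 14 GB node (numbers are assumptions).
# The idea: cap the JVM heap so physical memory remains for the OS page
# cache that backs the mmap'd SSTable files.
TOTAL_MB=14336                             # physical RAM: 14 GB
MAX_HEAP_MB=$((TOTAL_MB / 2))              # cap the heap at half of RAM...
PAGE_CACHE_MB=$((TOTAL_MB - MAX_HEAP_MB))  # ...leaving the rest for mmap
echo "MAX_HEAP_SIZE=${MAX_HEAP_MB}M"       # e.g. for conf/cassandra-env.sh
echo "left for page cache: ${PAGE_CACHE_MB} MB"
```

With these numbers the heap would stay at 7168M while another 7168 MB remains for memory-mapped I/O; the actual split depends on workload and cache settings.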
