cassandra-user mailing list archives

From Huy Le <>
Subject Re: Out of control memory consumption
Date Wed, 09 Feb 2011 19:04:40 GMT
> To be clear: You are not talking about the size of the Java process in
> top, but the actual amount of heap used as reported by the JVM via
> jmx/jconsole/etc?
It is the memory usage shown in JMX that we are talking about.
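For reference, the heap numbers JConsole and other JMX tools report come from the platform `MemoryMXBean`, not from the process RSS shown in top. A minimal sketch of reading the same values in-process (class name is illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapCheck {
    public static void main(String[] args) {
        // These are the same heap figures JConsole displays over JMX;
        // top's RSS additionally includes non-heap memory, thread stacks,
        // mmapped files, and so on.
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        System.out.println("heap used = " + heap.getUsed() / (1024 * 1024) + " MB");
        System.out.println("heap max  = " + heap.getMax() / (1024 * 1024) + " MB");
    }
}
```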

> Is the memory amount of memory that you consider high, the heap size
> just after a concurrent mark/sweep?
Memory usage grows over time.

> Are you actually seeing OOM:s or are you restarting the node
> pre-emptively in response to seeing heap usage go up?
No OOM.  We pre-emptively restart the node before it becomes unresponsive due to

> > And JVM memory allocation:          -Xms3G -Xmx3G
> Just FYI: So it is entirely expected that the JVM will be 3G (a bit
> higher) in size (even with standard I/O) and further that the amount
> of live data in the heap be approaching 3G. The concurrent mark/sweep
> GC won't trigger until the initial occupancy reaches the limit (if
> modern Cassandra with default settings).
Our CMS settings are:

        -XX:CMSInitiatingOccupancyFraction=35 \
        -XX:+UseCMSInitiatingOccupancyOnly \
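With those two flags, CMS starts a concurrent cycle once old-gen occupancy crosses 35%, and only then (the adaptive heuristic is disabled). As a rough illustration of where that threshold lands, assuming a hypothetical ~2 GB old generation out of the 3 GB heap (the real old-gen size depends on new-gen sizing flags):

```java
public class CmsThreshold {
    public static void main(String[] args) {
        // Assumed old-gen size; actual value depends on -Xmn/NewRatio etc.
        long oldGenBytes = 2L * 1024 * 1024 * 1024;
        // -XX:CMSInitiatingOccupancyFraction=35 means the concurrent cycle
        // begins when old-gen occupancy reaches 35% of its capacity.
        long triggerBytes = oldGenBytes * 35 / 100;
        System.out.println("CMS triggers near "
                + triggerBytes / (1024 * 1024) + " MB of old gen");
    }
}
```

So heap usage climbing well past that point between collections is expected behavior, not necessarily a leak.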

> If you've got a 3 gig heap size and the other nodes stay at 500 mb,
> the question is why *don't* they increase in heap usage. Unless your
> 500 mb is the report of the actual live data set as evidenced by
> post-CMS heap usage.
What's considered to be "live data"?  If we clear the caches and run flush on
the keyspace, shouldn't that free up memory?
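In the sense used above, "live data" is the heap that survives a full collection: objects still reachable after the caches are cleared and the memtables flushed. One rough way to see that floor (a sketch; `System.gc()` is only a request to the JVM, and the class name is illustrative) is to trigger a collection and read heap usage afterwards:

```java
public class LiveSetEstimate {
    public static void main(String[] args) {
        // Request (not guarantee) a full collection, then read heap usage.
        // The post-GC number approximates the live set; the pre-GC number
        // also includes garbage that CMS simply hasn't collected yet.
        System.gc();
        Runtime rt = Runtime.getRuntime();
        long liveBytes = rt.totalMemory() - rt.freeMemory();
        System.out.println("approx live set: "
                + liveBytes / (1024 * 1024) + " MB");
    }
}
```

Flushing frees the memtable contents, but the heap graph in JMX will only drop back to the live-set floor after the next CMS cycle actually runs.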



> --
> / Peter Schuller

Huy Le
Spring Partners, Inc.
