incubator-cassandra-user mailing list archives

From aaron morton <aa...@thelastpickle.com>
Subject Re: hadoop inserts blow out heap
Date Thu, 13 Sep 2012 21:13:35 GMT
What version of Cassandra are you using?

> 2.9G of the heap is consumed by a JMXConfigurableThreadPoolExecutor that appears to be full of batch mutations.
That sounds like writes were backing up and/or blocked for some reason. Check the logs for errors.

Running nodetool tpstats will show you if mutations are pending, and will say how many have been dropped.
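
For example, the output looks roughly like this (illustrative; the columns vary a little between versions and the numbers here are invented):

$ nodetool -h localhost tpstats
Pool Name                    Active   Pending      Completed   Blocked  All time blocked
ReadStage                         0         0        3456789         0                 0
MutationStage                    32    158000       10495382         0                 0
FlushWriter                       1         2           4820         0                 5

Message type           Dropped
MUTATION                 10241
READ                         0

A MutationStage Pending count that keeps climbing, or a non-zero MUTATION dropped count, means writes are arriving faster than the node can apply them.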


The log may also contain entries about pending and dropped messages.
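In a 1.x system.log they look roughly like this (illustrative; the exact wording, timestamps, and line numbers vary by version):

INFO [ScheduledTasks:1] 2012-09-13 09:00:00,123 MessagingService.java (line 658) 1024 MUTATION messages dropped in last 5000ms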

Hope that helps.

-----------------
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 12/09/2012, at 11:52 PM, Brian Jeltema <brian.jeltema@digitalenvoy.net> wrote:

> I'm a fairly novice Cassandra/Hadoop guy. I have written a Hadoop job (using the Cassandra/Hadoop
> integration API) that performs a full table scan and attempts to populate a new table from the
> results of the map/reduce. The read works fine and is fast, but the table insertion is failing
> with OOM errors (in the Cassandra VM). The resulting heap dump from one node shows that 2.9G of
> the heap is consumed by a JMXConfigurableThreadPoolExecutor that appears to be full of batch mutations.
> 
> I'm using a 6-node cluster, 32G per node, 8G heap, RF=3, if any of that matters.
> 
> Any suggestions would be appreciated regarding configuration changes or additional information
> I might capture to understand this problem.
> 
> Thanks
> 
> Brian J
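
For context, the output side of the kind of job described above typically looks something like the sketch below, modeled on the word_count example that ships with Cassandra and assuming the 1.1-era thrift-based ColumnFamilyOutputFormat. The keyspace, column family, seed address, and reducer logic are placeholders, not details from this thread.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.Collections;
import java.util.List;

import org.apache.cassandra.hadoop.ColumnFamilyOutputFormat;
import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.cassandra.thrift.Column;
import org.apache.cassandra.thrift.ColumnOrSuperColumn;
import org.apache.cassandra.thrift.Mutation;
import org.apache.cassandra.utils.ByteBufferUtil;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;

public class PopulateNewTable
{
    // Each reduce call emits a row key plus a batch of thrift Mutations;
    // ColumnFamilyOutputFormat queues these per node and sends them with
    // batch_mutate, which is what ends up sitting in the server-side
    // thread pool when the cluster can't keep up.
    public static class ToCassandra extends Reducer<Text, Text, ByteBuffer, List<Mutation>>
    {
        @Override
        protected void reduce(Text key, Iterable<Text> values, Context ctx)
                throws IOException, InterruptedException
        {
            for (Text value : values)
            {
                Column c = new Column();
                c.setName(ByteBufferUtil.bytes("data"));          // placeholder column name
                c.setValue(ByteBufferUtil.bytes(value.toString()));
                c.setTimestamp(System.currentTimeMillis() * 1000); // microseconds

                Mutation m = new Mutation();
                m.setColumn_or_supercolumn(new ColumnOrSuperColumn().setColumn(c));
                ctx.write(ByteBufferUtil.bytes(key.toString()), Collections.singletonList(m));
            }
        }
    }

    public static void main(String[] args) throws Exception
    {
        Job job = new Job(new Configuration(), "populate-new-table");
        job.setJarByClass(PopulateNewTable.class);
        // Map side (reading the source table via ColumnFamilyInputFormat) omitted.
        job.setReducerClass(ToCassandra.class);
        job.setOutputKeyClass(ByteBuffer.class);
        job.setOutputValueClass(List.class);
        job.setOutputFormatClass(ColumnFamilyOutputFormat.class);

        Configuration conf = job.getConfiguration();
        ConfigHelper.setOutputInitialAddress(conf, "127.0.0.1"); // any live node
        ConfigHelper.setOutputPartitioner(conf, "org.apache.cassandra.dht.RandomPartitioner");
        ConfigHelper.setOutputColumnFamily(conf, "MyKeyspace", "NewTable");

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}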

