incubator-cassandra-user mailing list archives

From aaron morton <aa...@thelastpickle.com>
Subject Re: hadoop inserts blow out heap
Date Fri, 14 Sep 2012 09:10:59 GMT
Hi Brian, did you see my follow-up questions here: http://www.mail-archive.com/user@cassandra.apache.org/msg24840.html

Cheers

-----------------
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 12/09/2012, at 11:52 PM, Brian Jeltema <brian.jeltema@digitalenvoy.net> wrote:

> I'm a fairly novice Cassandra/Hadoop guy. I have written a Hadoop job (using the Cassandra/Hadoop integration API)
> that performs a full table scan and attempts to populate a new table from the results of the map/reduce. The read
> works fine and is fast, but the table insertion is failing with OOM errors (in the Cassandra VM). The resulting
> heap dump from one node shows that 2.9G of the heap is consumed by a JMXConfigurableThreadPoolExecutor that
> appears to be full of batch mutations.
> 
> I'm using a 6-node cluster, 32G per node, 8G heap, RF=3, if any of that matters.
> 
> Any suggestions would be appreciated regarding configuration changes or additional information I might
> capture to understand this problem.
> 
> Thanks
> 
> Brian J
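
[Archive note: the failure mode Brian describes, a server-side executor filling with queued batch mutations, is typically mitigated by keeping each write batch small and bounded rather than accumulating one large batch per reduce call. In the Cassandra 1.x Hadoop integration this flush size is tunable (the ColumnFamilyOutputFormat batch threshold property); the helper below is a generic, hypothetical illustration of the bounded-batching idea, not part of Cassandra's actual API:]

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

/**
 * Buffers writes and flushes them to a sink in batches of at most
 * `threshold` items, so no single batch grows without limit.
 * (Hypothetical helper for illustration; not a Cassandra class.)
 */
public class BoundedBatcher<T> {
    private final int threshold;
    private final Consumer<List<T>> sink;   // e.g. "send this batch of mutations"
    private final List<T> buffer = new ArrayList<>();

    public BoundedBatcher(int threshold, Consumer<List<T>> sink) {
        this.threshold = threshold;
        this.sink = sink;
    }

    /** Buffer one item; flush automatically once the threshold is reached. */
    public void add(T item) {
        buffer.add(item);
        if (buffer.size() >= threshold) {
            flush();
        }
    }

    /** Send any buffered items as one batch and clear the buffer. */
    public void flush() {
        if (!buffer.isEmpty()) {
            sink.accept(new ArrayList<>(buffer));
            buffer.clear();
        }
    }
}
```

[In a reducer, calling `add(...)` per mutation and `flush()` in cleanup keeps the per-batch memory footprint on both client and server proportional to the threshold instead of to the row count.]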

