incubator-cassandra-user mailing list archives

From Jonathan Ellis <jbel...@gmail.com>
Subject Re: Memory leak with Sun Java 1.6 ?
Date Sun, 12 Dec 2010 16:21:50 GMT
http://www.riptano.com/docs/0.6/troubleshooting/index#nodes-are-dying-with-oom-errors
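
For context, the usual fix that page describes is shrinking the memtable thresholds so flushes happen before the heap fills. A hedged sketch of what that might look like in a 0.6-era storage-conf.xml (element names assumed from that version's config; verify against your own file):

```xml
<Storage>
  <!-- Assumption: 0.6 exposes global memtable thresholds like these.
       Lower values mean smaller, more frequent flushes and a smaller
       per-memtable heap footprint under heavy write load. -->
  <MemtableThroughputInMB>64</MemtableThroughputInMB>
  <MemtableOperationsInMillions>0.3</MemtableOperationsInMillions>
  <MemtableFlushAfterMinutes>60</MemtableFlushAfterMinutes>
</Storage>
```

Whichever threshold is hit first triggers the flush, so under a 50-thread write load the throughput and operations limits are the ones that matter.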

On Sun, Dec 12, 2010 at 9:52 AM, Timo Nentwig <timo.nentwig@toptarif.de> wrote:

>
> On Dec 10, 2010, at 19:37, Peter Schuller wrote:
>
> > To cargo cult it: Are you running a modern JVM? (Not e.g. openjdk b17
> > in lenny or some such.) If it is a JVM issue, ensuring you're using a
> > reasonably recent JVM is probably much easier than to start tracking
> > it down...
>
> I had OOM problems with OpenJDK, switched to Sun/Oracle's recent 1.6.0_23,
> and... still have the same problem :-\ The stack trace always looks the same:
>
> java.lang.OutOfMemoryError: Java heap space
>        at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
>        at java.nio.ByteBuffer.allocate(ByteBuffer.java:329)
>        at org.apache.cassandra.utils.FBUtilities.readByteArray(FBUtilities.java:261)
>        at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:76)
>        at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:35)
>        at org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumns(ColumnFamilySerializer.java:129)
>        at org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:120)
>        at org.apache.cassandra.db.RowMutationSerializer.defreezeTheMaps(RowMutation.java:383)
>        at org.apache.cassandra.db.RowMutationSerializer.deserialize(RowMutation.java:393)
>        at org.apache.cassandra.db.RowMutationSerializer.deserialize(RowMutation.java:351)
>        at org.apache.cassandra.db.RowMutationVerbHandler.doVerb(RowMutationVerbHandler.java:52)
>        at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:63)
>        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>        at java.lang.Thread.run(Thread.java:636)
>
> I'm writing from one client with 50 threads to a cluster of 4 machines
> (using hector). At both QUORUM and ONE, two machines quite reliably die
> with OOM before long. What could cause this? Shouldn't Cassandra
> block/reject writes while a full memtable is being flushed to disk,
> instead of letting it keep growing and running out of memory when
> flushing can't keep up?




-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com
