Subject: Re: Cass 1.1.11 out of memory during compaction ?
From: Takenori Sato
To: user@cassandra.apache.org
Date: Mon, 4 Nov 2013 11:28:40 +0900

Try increasing column_index_size_in_kb.

A slice query that reads a range of columns (SliceFromReadCommand) has to deserialize all of the column index entries for the row, so it can hit an OutOfMemoryError if you have a very wide row.
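
For reference, the setting lives in cassandra.yaml. A minimal sketch of the change (the 256 KB figure below is only an illustration of "bigger than the 64 KB default", not a tuned recommendation for your workload):

    # cassandra.yaml
    #
    # Granularity of the per-row column index. A row larger than this value gets
    # one index entry per block of this size, and an indexed slice read pulls the
    # row's entire entry list onto the heap before it can seek into the row.
    # The default is 64; raising it (for example to 256) shrinks the index for
    # very wide rows, at the cost of coarser seeks within a row.
    column_index_size_in_kb: 256

Rough arithmetic for why a wide row hurts at the 64 KB default: a 10 GB row, say, carries on the order of 160,000 index entries, and each entry stores the first and last column name of its block plus an offset and width, so the deserialized index alone can reach tens of megabytes for a single read. Quadrupling the block size cuts the entry count by roughly 4x. As far as I understand, the setting only applies to SSTables written afterwards, so existing SSTables keep their old index granularity until they are compacted or rewritten.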

On Sun, Nov 3, 2013 at 11:54 PM, Oleg Dulin wrote:

> Cass 1.1.11 ran out of memory on me with this exception (see below).
>
> My parameters are 8gig heap, new gen is 1200M.
>
> ERROR [ReadStage:55887] 2013-11-02 23:35:18,419 AbstractCassandraDaemon.java (line 132) Exception in thread Thread[ReadStage:55887,5,main]
> java.lang.OutOfMemoryError: Java heap space
>         at org.apache.cassandra.io.util.RandomAccessReader.readBytes(RandomAccessReader.java:323)
>         at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:398)
>         at org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:380)
>         at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:88)
>         at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:83)
>         at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:73)
>         at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:37)
>         at org.apache.cassandra.db.columniterator.IndexedSliceReader$IndexedBlockFetcher.getNextBlock(IndexedSliceReader.java:179)
>         at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:121)
>         at org.apache.cassandra.db.columniterator.IndexedSliceReader.computeNext(IndexedSliceReader.java:48)
>         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
>         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
>         at org.apache.cassandra.db.columniterator.SSTableSliceIterator.hasNext(SSTableSliceIterator.java:116)
>         at org.apache.cassandra.utils.MergeIterator$Candidate.advance(MergeIterator.java:147)
>         at org.apache.cassandra.utils.MergeIterator$ManyToOne.advance(MergeIterator.java:126)
>         at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:100)
>         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
>         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
>         at org.apache.cassandra.db.filter.SliceQueryFilter.collectReducedColumns(SliceQueryFilter.java:117)
>         at org.apache.cassandra.db.filter.QueryFilter.collateColumns(QueryFilter.java:140)
>         at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:292)
>         at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:64)
>         at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1362)
>         at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1224)
>         at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1159)
>         at org.apache.cassandra.db.Table.getRow(Table.java:378)
>         at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:69)
>         at org.apache.cassandra.db.ReadVerbHandler.doVerb(ReadVerbHandler.java:51)
>         at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:59)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:722)
>
> Any thoughts ?
>
> This is a dual data center set up, with 4 nodes in each DC and RF=2 in each.
>
> --
> Regards,
> Oleg Dulin
> http://www.olegdulin.com