cassandra-commits mailing list archives

From "Ariel Weisberg (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (CASSANDRA-10688) Stack overflow from SSTableReader$InstanceTidier.runOnClose in Leak Detector
Date Thu, 26 Nov 2015 01:27:10 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-10688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15027960#comment-15027960 ]

Ariel Weisberg edited comment on CASSANDRA-10688 at 11/26/15 1:26 AM:
----------------------------------------------------------------------

Proposed change. [~benedict], when you are available you will probably want to review this.

The search is iterative, and the maximum depth and number of visited objects can be set via
system properties. Because the search records a set of all visited objects, it's a good idea
to bound the amount of space it can use. Right now the maximum depth defaults to 128 and the
maximum number of visited objects to 100k.
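
For reference, here is a rough, self-contained sketch of what a bounded, iterative reference
walk can look like. This is not the actual patch; the class name, the property names, and the
handling of arrays and inaccessible fields are simplifications for illustration only.

{code:java}
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Deque;
import java.util.IdentityHashMap;
import java.util.Set;

// Illustrative sketch only: an iterative object-graph walk with a bounded depth and a
// bounded visited set, roughly the shape of the change described above. The property
// names and defaults here are assumptions, not the ones used in the actual patch.
public final class BoundedGraphWalk
{
    private static final int MAX_DEPTH =
        Integer.getInteger("example.leakdetector.max_depth", 128);
    private static final int MAX_VISITED =
        Integer.getInteger("example.leakdetector.max_visited", 100_000);

    private static final class Frame
    {
        final Object object;
        final int depth;
        Frame(Object object, int depth) { this.object = object; this.depth = depth; }
    }

    /** Returns true if the walk completed within the configured bounds. */
    public static boolean walk(Object root)
    {
        // Identity-based visited set: the walk cares about distinct object instances,
        // not equals()/hashCode().
        Set<Object> visited = Collections.newSetFromMap(new IdentityHashMap<>());
        Deque<Frame> stack = new ArrayDeque<>();
        stack.push(new Frame(root, 0));

        while (!stack.isEmpty())
        {
            Frame frame = stack.pop();
            if (frame.object == null || !visited.add(frame.object))
                continue;
            if (visited.size() > MAX_VISITED || frame.depth >= MAX_DEPTH)
                return false; // give up rather than use unbounded space or time

            // Push every non-static reference field declared anywhere in the hierarchy.
            for (Class<?> c = frame.object.getClass(); c != null; c = c.getSuperclass())
            {
                for (Field field : c.getDeclaredFields())
                {
                    if (field.getType().isPrimitive() || Modifier.isStatic(field.getModifiers()))
                        continue;
                    try
                    {
                        field.setAccessible(true);
                        stack.push(new Frame(field.get(frame.object), frame.depth + 1));
                    }
                    catch (IllegalAccessException | RuntimeException e)
                    {
                        // Fields we cannot read are simply skipped in this sketch.
                    }
                }
            }
        }
        return true;
    }
}
{code}

The point of the explicit stack is that a long chain of {{ConcurrentLinkedHashMap$Node.next}}
references only grows a heap-allocated deque instead of the thread stack, and the two bounds
make the walk bail out rather than exhaust memory once the graph is too large.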

|[3.0 code|https://github.com/apache/cassandra/compare/cassandra-3.0...aweisberg:CASSANDRA-10688-3.0?expand=1]|[utests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-10688-3.0-testall/]|[dtests|http://cassci.datastax.com/view/Dev/view/aweisberg/job/aweisberg-CASSANDRA-10688-3.0-dtest/]|



> Stack overflow from SSTableReader$InstanceTidier.runOnClose in Leak Detector
> ----------------------------------------------------------------------------
>
>                 Key: CASSANDRA-10688
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-10688
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Jeremiah Jordan
>            Assignee: Ariel Weisberg
>             Fix For: 3.0.1, 3.1
>
>
> Running some tests against cassandra-3.0 9fc957cf3097e54ccd72e51b2d0650dc3e83eae0
> The tests just run cassandra-stress write and read while adding and removing nodes
from the cluster.  After the tests run, when I go back through the logs I find the following
stack overflow fairly often:
> ERROR [Strong-Reference-Leak-Detector:1] 2015-11-11 00:04:10,638  Ref.java:413 - Stackoverflow
[private java.lang.Runnable org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier.runOnClose,
final java.lang.Runnable org.apache.cassandra.io.sstable.format.SSTableReader$DropPageCache.andThen,
final org.apache.cassandra.cache.InstrumentingCache org.apache.cassandra.io.sstable.SSTableRewriter$InvalidateKeys.cache,
private final org.apache.cassandra.cache.ICache org.apache.cassandra.cache.InstrumentingCache.map,
private final com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap org.apache.cassandra.cache.ConcurrentLinkedHashCache.map,
final com.googlecode.concurrentlinkedhashmap.LinkedDeque com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap.evictionDeque,
com.googlecode.concurrentlinkedhashmap.Linked com.googlecode.concurrentlinkedhashmap.LinkedDeque.first,
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next,
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next,
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next,
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next,
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next,
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next,
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next,
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next,
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next,
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node 
> ....... (repeated a whole bunch more) .... 
> com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next,
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next,
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next,
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next,
com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.next,
final java.lang.Object com.googlecode.concurrentlinkedhashmap.ConcurrentLinkedHashMap$Node.key,
public final byte[] org.apache.cassandra.cache.KeyCacheKey.key



