cassandra-commits mailing list archives

From "stone (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CASSANDRA-11460) memory leak
Date Sun, 10 Apr 2016 10:26:25 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-11460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15234049#comment-15234049 ]

stone commented on CASSANDRA-11460:
-----------------------------------

This issue is not related to CASSANDRA-9549.
Summary:
The out-of-memory error is caused by a ConcurrentLinkedQueue. I thought it was a duplicate of CASSANDRA-9549, but now
I understand I made a mistake.
CASSANDRA-9549 was caused by Ref.class, and it has been resolved.
My issue is caused by SEPExecutor.class:
protected final ConcurrentLinkedQueue<FutureTask<?>> tasks = new ConcurrentLinkedQueue<>();

I only see calls to the tasks.add method, but no remove operation. Could this be one of the reasons?
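
For illustration, here is a rough sketch of the drain side I would expect (a hypothetical loop, not Cassandra's actual SEPWorker code); a queued FutureTask only becomes garbage-collectable after poll() unlinks it:

// Hypothetical consumer loop (not taken from Cassandra source):
// each FutureTask stays strongly reachable from the queue until
// poll() removes it, so a missing or starved consumer retains
// every task that was ever added.
FutureTask<?> task;
while ((task = tasks.poll()) != null) {
    task.run();
}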

ConcurrentLinkedQueue is unbounded and non-blocking, and my Cassandra cluster runs in a weak environment.
If the queue cannot be consumed as fast as it is produced, it will keep growing and eventually cause an out-of-memory error. Is that
right?
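
To show what I mean, here is a minimal standalone sketch (my own toy program, not Cassandra code) where a producer enqueues faster than the consumer drains, so the unbounded queue grows until the heap is exhausted:

import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.FutureTask;

public class UnboundedQueueGrowth {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentLinkedQueue<FutureTask<?>> tasks = new ConcurrentLinkedQueue<>();

        // Producer: 10 tasks per tick. add() never blocks and the
        // queue has no capacity bound, so enqueueing always succeeds.
        Thread producer = new Thread(() -> {
            while (true) {
                for (int i = 0; i < 10; i++) {
                    tasks.add(new FutureTask<Object>(() -> null));
                }
                try { Thread.sleep(1); } catch (InterruptedException e) { return; }
            }
        });
        producer.setDaemon(true);

        // Consumer: only 1 task per tick -- slower than the producer.
        Thread consumer = new Thread(() -> {
            while (true) {
                FutureTask<?> t = tasks.poll();
                if (t != null) t.run();
                try { Thread.sleep(1); } catch (InterruptedException e) { return; }
            }
        });
        consumer.setDaemon(true);

        producer.start();
        consumer.start();

        // The backlog grows roughly linearly; on a real heap this
        // eventually ends in OutOfMemoryError. (Note: size() is O(n)
        // for this queue, which is fine for a demo.)
        for (int s = 0; s < 10; s++) {
            Thread.sleep(1000);
            System.out.println("queue size: " + tasks.size());
        }
    }
}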

> memory leak
> -----------
>
>                 Key: CASSANDRA-11460
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11460
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: stone
>            Priority: Critical
>         Attachments: aaa.jpg
>
>
> Environment:
> Cassandra 3.3
> JDK 8
> 8 GB RAM
> so I set:
> MAX_HEAP_SIZE="2G"
> HEAP_NEWSIZE="400M"
> 1. I met the same problem as this:
> https://issues.apache.org/jira/browse/CASSANDRA-9549
> I am confused, because according to this page it was fixed in release 3.3:
> https://github.com/apache/cassandra/blob/trunk/CHANGES.txt
> So I changed to 3.4, and hit this problem again.
> I think this fix should be included in 3.3 and 3.4.
> Can you explain this?
> 2. Our write rate exceeds what our Cassandra environment can support,
> but I think Cassandra should decrease the write rate, or block: consume the written data, keep
> memory usage down, and then continue writing, rather than running out of memory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
