cassandra-commits mailing list archives

From "stone (JIRA)" <>
Subject [jira] [Commented] (CASSANDRA-11460) memory leak
Date Sun, 10 Apr 2016 10:26:25 GMT


stone commented on CASSANDRA-11460:

This issue is not related to CASSANDRA-9549. I originally thought the out-of-memory caused by ConcurrentLinkedQueue was a duplicate of CASSANDRA-9549, but now I understand I made a mistake. CASSANDRA-9549 was caused by Ref.class and has been resolved; my issue is caused by SEPExecutor.class:
protected final ConcurrentLinkedQueue<FutureTask<?>> tasks = new ConcurrentLinkedQueue<>();

I only see calls to the tasks.add method, but no corresponding remove operation, so could this be one of the causes?

ConcurrentLinkedQueue is non-blocking and unbounded. Given my underpowered Cassandra cluster environment, if the consumer cannot keep up with the producer, the queue will keep growing and eventually cause the out-of-memory.
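The growth pattern described above can be illustrated with a minimal sketch (hypothetical demo code, not Cassandra's): ConcurrentLinkedQueue has no capacity bound, so add() always succeeds, and a consumer that falls behind leaves an ever-growing backlog on the heap.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical illustration: a ConcurrentLinkedQueue is unbounded, so
// when add() outpaces poll(), the backlog (and heap usage) keeps growing.
public class UnboundedQueueDemo {
    public static void main(String[] args) {
        ConcurrentLinkedQueue<Runnable> tasks = new ConcurrentLinkedQueue<>();
        // Producer enqueues 1000 tasks; add() never blocks or rejects.
        for (int i = 0; i < 1000; i++) {
            tasks.add(() -> {});
        }
        // Consumer only manages to drain 100 of them.
        for (int i = 0; i < 100; i++) {
            tasks.poll();
        }
        // 900 tasks remain queued; nothing bounds this growth.
        System.out.println(tasks.size()); // prints 900
    }
}
```

With a sustained producer/consumer rate imbalance, this backlog grows without limit until the heap is exhausted.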

> memory leak
> -----------
>                 Key: CASSANDRA-11460
>                 URL:
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: stone
>            Priority: Critical
>         Attachments: aaa.jpg
> env:
> cassandra3.3
> jdk8
> 8G Ram
> so set
> 1. Met the same problem as this:
> I am confused, because this was supposedly fixed in release 3.3 according to this page:
> so I changed to 3.4, and found the problem again.
> I think this fix should be included in
> Can you explain this?
> 2. Our write rate exceeds what our Cassandra environment can support,
> but I think Cassandra should decrease the write rate, or block: consume the written data, keep
the memory down, then continue writing, rather than cause an out-of-memory.
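The behaviour point 2 asks for, blocking producers instead of exhausting the heap, is what a bounded queue provides. A minimal sketch (hypothetical demo code, not Cassandra's actual backpressure mechanism):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical illustration: a bounded queue applies backpressure by
// blocking (put) or rejecting (offer) producers once it is full, instead
// of letting the backlog grow until out-of-memory.
public class BoundedQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Runnable> tasks = new ArrayBlockingQueue<>(100);
        // Fill the queue to its capacity of 100.
        for (int i = 0; i < 100; i++) {
            tasks.put(() -> {}); // put() would block here if the queue were full
        }
        // A non-blocking offer() on a full queue fails fast rather than grow.
        boolean accepted = tasks.offer(() -> {});
        System.out.println(accepted); // prints false: backpressure applied
    }
}
```

Either blocking the writer or rejecting the write keeps memory bounded at the cost of throughput, which is the trade-off the reporter is describing.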

This message was sent by Atlassian JIRA
