cassandra-commits mailing list archives

From "Jonathan Ellis (JIRA)" <j...@apache.org>
Subject [jira] Resolved: (CASSANDRA-2273) Possible Memory leak
Date Fri, 04 Mar 2011 21:23:45 GMT

     [ https://issues.apache.org/jira/browse/CASSANDRA-2273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis resolved CASSANDRA-2273.
---------------------------------------

       Resolution: Not A Problem
    Fix Version/s:     (was: 0.7.4)

That's a reasonable idea.  My only worry is that (as with the memtable throughput setting)
people won't realize that the size-in-memory is about 8x the size-on-disk, which is the
number we store in the sstable.
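
To make that gap concrete, here is a minimal sketch (made-up code, not anything in the
codebase; estimateHeapBytes and IN_MEMORY_FACTOR are hypothetical names) applying the ~8x
ratio to a serialized row size:

    // Hypothetical sketch: a size-on-disk number misleads because the
    // in-memory representation of a row runs roughly 8x its serialized
    // size, which is the number recorded in the sstable.
    public class RowSizeEstimate {
        static final long IN_MEMORY_FACTOR = 8; // assumed ratio, per the comment above

        static long estimateHeapBytes(long serializedBytes) {
            return serializedBytes * IN_MEMORY_FACTOR;
        }

        public static void main(String[] args) {
            long onDisk = 4L << 20; // a 4MB row as the sstable records it
            System.out.printf("on disk: %d MB, est. in memory: %d MB%n",
                    onDisk >> 20, estimateHeapBytes(onDisk) >> 20);
        }
    }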

It's actually even more complicated than that, because you could have 4 versions of the row,
each 4MB in size -- the merged row could be 4MB, 16MB, or anything in between, and you don't
know which until you merge it.
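
Spelled out: fully overlapping versions collapse to roughly the size of the largest one,
while fully disjoint versions add up, so the merge result is only bounded, not known, in
advance.  A purely illustrative sketch of that range (MergedRowBounds is a made-up name):

    import java.util.stream.LongStream;

    // Illustrative only: bounds on a merged row's size given the sizes
    // of its versions across sstables. Identical columns in every
    // version give max(sizes); disjoint columns give sum(sizes).
    public class MergedRowBounds {
        public static void main(String[] args) {
            long[] versionsMB = {4, 4, 4, 4}; // four 4MB versions of one row
            long lowerMB = LongStream.of(versionsMB).max().orElse(0); // fully overlapping
            long upperMB = LongStream.of(versionsMB).sum();           // fully disjoint
            System.out.printf("merged row: anywhere from %d MB to %d MB%n", lowerMB, upperMB);
        }
    }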

> Possible Memory leak
> --------------------
>
>                 Key: CASSANDRA-2273
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-2273
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>            Reporter: Sébastien Giroux
>            Priority: Critical
>         Attachments: heap_peak_OOM.PNG, jconsole_OOM.PNG
>
>
> I have a few problematic nodes in my cluster that crash with OutOfMemory very often.
> This is Cassandra 0.7.3 downloaded from Hudson.
> Heap size is 6GB, server memory is 8GB.
> Memtables are flushed at 64MB; I have 5 CFs.
> FlushLargestMemtablesAt is set to 0.8 but doesn't help with this issue.
> I will attach a screenshot showing my issue. There is no compaction going on when the
> heap usage starts increasing like crazy.
> It could be a configuration issue, but it kinda looks like a bug to me.
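
For what it's worth, reading the comment above back against the reported configuration shows
real pressure even without a leak.  Purely illustrative arithmetic (hypothetical class and
variable names; assumes the 64MB threshold is measured in serialized terms and that the ~8x
ratio holds):

    // Back-of-envelope only, not a diagnosis: five CFs flushing at a
    // 64MB serialized threshold can pin roughly 5 * 64 * 8 = 2560MB of
    // live memtable data on the heap before compaction, caches, or GC
    // headroom enter the picture.
    public class MemtableHeapEstimate {
        public static void main(String[] args) {
            int columnFamilies = 5;    // CFs from the report
            int flushThresholdMB = 64; // per-CF flush threshold (serialized size)
            int inMemoryFactor = 8;    // assumed ratio, per the comment above
            int heapMB = 6 * 1024;     // 6GB heap from the report
            System.out.printf("est. live memtable data: %d MB of a %d MB heap%n",
                    columnFamilies * flushThresholdMB * inMemoryFactor, heapMB);
        }
    }

That would put nearly half the 6GB heap in live memtable data alone, which fits the
Not A Problem resolution above rather than a leak.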

-- 
This message is automatically generated by JIRA.
-
For more information on JIRA, see: http://www.atlassian.com/software/jira
