geode-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <>
Subject [jira] [Commented] (GEODE-3286) Failing to cleanup connections from ConnectionTable receiver table
Date Wed, 26 Jul 2017 17:48:01 GMT


ASF GitHub Bot commented on GEODE-3286:

Github user galen-pivotal commented on a diff in the pull request:
    --- Diff: geode-core/src/main/java/org/apache/geode/internal/tcp/ ---
    @@ -1322,6 +1328,14 @@ private void createBatchSendBuffer() {
    +  public void onIdleCancel() {
    --- End diff ---
    Should this be closing and cleaning up the connection as well?
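A minimal sketch of what the review comment suggests: when an idle timer fires, the connection should be both closed and removed from the table so it becomes eligible for garbage collection. The names here (`IdleConnection`, `SimpleConnectionTable`) are illustrative only, not Geode's actual API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical model; not the real ConnectionTable implementation.
class IdleConnection {
  private volatile boolean closed = false;

  void close() {
    closed = true; // a real connection would also release its socket here
  }

  boolean isClosed() {
    return closed;
  }
}

class SimpleConnectionTable {
  private final Map<String, IdleConnection> receivers = new ConcurrentHashMap<>();

  void register(String member, IdleConnection c) {
    receivers.put(member, c);
  }

  // What the comment asks for: on idle cancellation, remove the entry from
  // the table AND close the connection, so neither the socket nor the map
  // entry lingers.
  void onIdleCancel(String member) {
    IdleConnection c = receivers.remove(member);
    if (c != null) {
      c.close();
    }
  }

  int size() {
    return receivers.size();
  }
}
```

If `onIdleCancel` only cancels the timer without the remove-and-close step, the table keeps a strong reference to the dead connection, which is exactly the accumulation pattern the issue below describes.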

> Failing to cleanup connections from ConnectionTable receiver table
> ------------------------------------------------------------------
>                 Key: GEODE-3286
>                 URL:
>             Project: Geode
>          Issue Type: Bug
>          Components: membership
>            Reporter: Brian Rowe
> This bug tracks gemfire issue 1554 (
> Hello team,
> A customer (VMWare) is experiencing repeated {{OutOfMemoryError}}s on production servers, and they believe there is a memory leak within GemFire.
> Apparently 9.5GB of the heap is occupied by 487,828 instances of {{}}, and 7.7GB of the heap is occupied by 487,804 instances of {{}}, both referenced from the {{receivers}} attribute of the {{ConnectionTable}} class. I got this information from the Eclipse Memory Analyzer plugin; the images are attached.
> Below are some OQLs that I was able to run within the plugin; it is strange that the collection of receivers holds 486.368 elements...
> {code}
> SELECT * FROM com.gemstone.gemfire.internal.tcp.ConnectionTable
> 	-> 1
> SELECT receivers.size FROM com.gemstone.gemfire.internal.tcp.ConnectionTable 
> 	-> 486.368
> SELECT * FROM com.gemstone.gemfire.internal.tcp.Connection
> 	-> 487.758
> SELECT * FROM com.gemstone.gemfire.internal.tcp.Connection con WHERE con.stopped = true
> 	-> 486.461
> SELECT * FROM com.gemstone.gemfire.internal.tcp.Connection con WHERE con.stopped = false
> 	-> 1297
> {code}
> That said, nothing in the statistics (maybe there's something, but I can't find it...) points to a spike in the number of entries within the regions or in the current number of connections, nor to anything that would explain the continuous drop of available heap over time (chart#freeMemory).
> The heap dump (approximately 20GB) and the statistics (we don't have logs yet, but they may not be required given the heap dump and the statistics) have been uploaded to [Google
> Just for the record, apparently we delivered to them a year and a half ago as a fix to [GEM-94|] / [GEODE-332|], and they had been running fine since then, until now. The last change in {{ConnectionTable}} was made to fix those issues, so if there is actually a bug within the class, it will also exist in 8.2.5 (just a reminder to change the affected-version field if required).
> The issue is not reproducible at will but happens in several of their environments; so far I haven't been able to reproduce it in my lab environment.
> Please let me know if you need anything else to proceed.
> Best regards.
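The OQL results above suggest the leak pattern: nearly all `Connection` instances have `stopped = true` yet remain strongly reachable from the `receivers` collection, so they can never be garbage collected. The following is an illustrative model of that pattern under assumed names (`LeakDemo`, `sweepStopped`); it is not Geode code.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

class LeakDemo {
  // Loosely mirrors the fields in the report; purely illustrative.
  static class Connection {
    boolean stopped = false;
  }

  static final List<Connection> receivers = new ArrayList<>();

  // Simulate connection churn: each peer connection adds an entry, and when
  // the connection stops, the entry is never removed from the table. Over
  // time the table grows without bound -- the heap-dump symptom above.
  static void simulate(int churn) {
    for (int i = 0; i < churn; i++) {
      Connection c = new Connection();
      receivers.add(c);
      c.stopped = true; // connection ends, but its table entry stays reachable
    }
  }

  // The missing cleanup: remove stopped connections from the table so the
  // garbage collector can reclaim them. Returns how many were removed.
  static int sweepStopped() {
    int removed = 0;
    for (Iterator<Connection> it = receivers.iterator(); it.hasNext(); ) {
      if (it.next().stopped) {
        it.remove();
        removed++;
      }
    }
    return removed;
  }
}
```

With churn of hundreds of thousands of connections, skipping the sweep reproduces the ~486K stopped-but-retained instances the customer's heap dump shows.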

This message was sent by Atlassian JIRA
