cayenne-dev mailing list archives

From John Huss <>
Subject Re: Watch out for memory leaks with EhCache
Date Fri, 06 Dec 2019 21:09:37 GMT
Switching to the dev list:

I've had some time this week to revisit this issue with memory leaks,
especially when using the Local Query Cache. There are two separate issues
to address:

1) The lifetime of entries in the Local Query Cache exceeds their
availability, which is the life of their ObjectContext. Any cache that is
not expiring entries (or limiting them) will just leak this memory.

2) Cached query results will retain the ObjectContext they were fetched
into, which in turn may retain a much larger number of objects than
intended. For example, if you use a single ObjectContext to fetch 1 million
uncached objects along with 1 cached object, you will retain 1 million and
1 objects in memory rather than just 1. This is potentially an issue with
both the Shared and Local Query Caches.
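To make issue #2 concrete, here is a minimal, self-contained sketch of the retention chain. `Context` and `Entity` are stand-ins, not Cayenne's real ObjectContext/Persistent classes; the point is only that the back-reference from a cached object to its context keeps every other registered object reachable:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RetentionDemo {
    // Stand-in for an ObjectContext: registers every object it fetches.
    static class Context {
        final List<Entity> registered = new ArrayList<>();
        Entity fetch(int id) {
            Entity e = new Entity(id, this);
            registered.add(e);
            return e;
        }
    }

    // Stand-in for a persistent object: holds a back-reference to its context.
    static class Entity {
        final int id;
        final Context context; // this back-reference is what causes the leak
        Entity(int id, Context context) { this.id = id; this.context = context; }
    }

    public static void main(String[] args) {
        Map<String, Entity> queryCache = new HashMap<>();

        Context ctx = new Context();
        for (int i = 0; i < 1_000_000; i++) {
            ctx.fetch(i); // uncached fetches; caller discards the results
        }
        queryCache.put("cached-query", ctx.fetch(1_000_000)); // the one cached object

        // The single cached entry transitively retains the context and
        // all 1,000,001 registered objects, even after the caller is done.
        Entity cached = queryCache.get("cached-query");
        System.out.println(cached.context.registered.size()); // 1000001
    }
}
```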

Also, because the cached objects still reference the ObjectContext, it
appears that the context will not be garbage collected. So a simple attempt
to solve issue #1 by invalidating the Local Query Cache when an
ObjectContext is finalized doesn't work, because the context will never be
finalized.

Possible Solutions:

One solution is to null out the ObjectContext on any objects that are
inserted into the Query Cache. This solves both problems above, and it
seems logical since when the objects are retrieved from the cache they will
be placed into a new context anyway. This should work, but the
implementation has been tricky. Handling the deep object graphs due to
prefetching makes this a bit more complicated. This is what I've been working
on most recently.
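The graph-walking part of this first solution could look roughly like the following. Everything here is a stand-in (`Node` rather than Cayenne's Persistent, a plain `Object` for the context); the real implementation has to deal with Cayenne's actual relationship accessors, but the shape of the traversal, including the cycle guard that bidirectional prefetches require, is the same:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Set;

public class DetachOnInsert {
    // Stand-in for a persistent object; `related` models prefetched
    // relationships that make the cached object graph deep.
    static class Node {
        Object context;                       // would be ObjectContext in Cayenne
        final List<Node> related = new ArrayList<>();
    }

    // Null out every context reference in the result graph before the list
    // goes into the query cache. The identity-based visited set guards
    // against cycles from bidirectional prefetched relationships.
    static void detach(List<Node> results) {
        Set<Node> visited = Collections.newSetFromMap(new IdentityHashMap<>());
        for (Node n : results) detachOne(n, visited);
    }

    private static void detachOne(Node n, Set<Node> visited) {
        if (n == null || !visited.add(n)) return;
        n.context = null;
        for (Node child : n.related) detachOne(child, visited);
    }

    public static void main(String[] args) {
        Object ctx = new Object();
        Node a = new Node(); a.context = ctx;
        Node b = new Node(); b.context = ctx;
        a.related.add(b);
        b.related.add(a); // cycle, as with a bidirectional relationship

        detach(List.of(a));
        System.out.println(a.context == null && b.context == null); // true
    }
}
```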

A different solution that fixes the issues with the Local Cache (but not
the Shared Cache) would be to separate the storage for these two caches and
use a new instance of the Local Cache for each ObjectContext so that their
lifetimes correspond exactly. This could be done inside the Query Cache
itself so that users wouldn't need to know. I'm imagining a CombinedCache
or something that combines two caches together into one, where the Shared
Cache entries go to a long-lived cache and the Local Cache entries go to a
small short-lived cache. This solution is easier to implement (less
invasive) and seems more natural to me since it ties these lifetimes
together.

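For the second solution, the CombinedCache idea could be sketched like this. The `SimpleCache` interface and String keys are simplifications I've invented for the example (Cayenne's real query cache keys entries by query metadata and cache group), but they show the routing: local entries go to a short-lived cache that dies with its ObjectContext, shared entries to the long-lived one, behind a single facade:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class CombinedCacheDemo {
    // Hypothetical, simplified cache interface; a String key stands in for
    // Cayenne's query metadata + cache group.
    interface SimpleCache {
        List<?> get(String key);
        void put(String key, List<?> results);
    }

    static class MapCache implements SimpleCache {
        final Map<String, List<?>> map = new HashMap<>();
        public List<?> get(String key) { return map.get(key); }
        public void put(String key, List<?> results) { map.put(key, results); }
    }

    // Combines two caches into one: local-cache entries go to a short-lived
    // per-context cache, shared entries to the long-lived shared cache, so
    // callers see a single cache while the storages keep distinct lifetimes.
    static class CombinedCache implements SimpleCache {
        final SimpleCache shared;        // long-lived, app-scoped
        final SimpleCache local;         // discarded with its ObjectContext
        final Predicate<String> isLocal;

        CombinedCache(SimpleCache shared, SimpleCache local, Predicate<String> isLocal) {
            this.shared = shared;
            this.local = local;
            this.isLocal = isLocal;
        }

        public List<?> get(String key) {
            return isLocal.test(key) ? local.get(key) : shared.get(key);
        }

        public void put(String key, List<?> results) {
            (isLocal.test(key) ? local : shared).put(key, results);
        }
    }

    public static void main(String[] args) {
        SimpleCache shared = new MapCache();
        // One fresh local cache per ObjectContext; dropping the context drops it.
        CombinedCache cache = new CombinedCache(shared, new MapCache(),
                key -> key.startsWith("local:"));

        cache.put("local:q1", List.of("a"));
        cache.put("shared:q2", List.of("b"));

        System.out.println(shared.get("local:q1") == null); // true: local never leaks into shared
        System.out.println(cache.get("local:q1"));          // [a]
    }
}
```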
These two solutions are not mutually exclusive - they both could be done.

What Now?

I've taken a first stab at implementing both of these solutions and have
had some concerns raised about each of them [1] [2]. I'd like to implement
something to fix this problem directly in Cayenne rather than fixing it
only for myself. I'd love to hear any feedback or suggestions on this
before I go further down what might be the wrong road.


[1] (My local copy of this code
now differs fairly significantly from what is currently in the pull request)
[2] (I would change this code to
make the separate caches live inside a single CombinedCache now)

On Sat, Nov 30, 2019 at 2:28 AM Andrus Adamchik <> wrote:

> Hi,
> Wanted to mention a scenario when cache misconfiguration can lead to a
> hard-to-detect memory leak. More details are available in this Jira [1],
> but the short story is that when you are using JCache/EhCache, make sure
> that *all* the cache groups used in queries are *explicitly* configured
> with desired size limits in ehcache.xml [2], including an entry for a
> no-cache group scenario that corresponds to "cayenne.default.cache" cache
> name.
> Andrus
> [1]
> [2] <config
>         xmlns:xsi=''
>         xmlns=''
>         xsi:schemaLocation=''>
>     <!-- Used by Cayenne for queries without explicit cache groups. -->
>     <cache alias="cayenne.default.cache">
>         <expiry>
>             <ttl unit="minutes">1</ttl>
>         </expiry>
>         <heap>100</heap>
>     </cache>
>     <cache alias="my-cache-group">
>         <expiry>
>             <ttl unit="minutes">1</ttl>
>         </expiry>
>         <heap>10</heap>
>     </cache>
> </config>
