cayenne-dev mailing list archives

From John Huss <johnth...@gmail.com>
Subject Re: Watch out for memory leaks with EhCache
Date Mon, 23 Dec 2019 15:09:09 GMT
On Sun, Dec 22, 2019 at 5:47 AM Andrus Adamchik <andrus@objectstyle.org>
wrote:

>
>
> > On Dec 11, 2019, at 11:18 PM, John Huss <johnthuss@gmail.com> wrote:
> >
> > My use case is very limited in scope. I want to have fresh data basically
> > all the time, but not fetch the same data twice in the same request. Once
> > the request is over and the request's context is out of scope, then the
> > local cache for that context can be cleared. So this cached data is
> > extremely short lived, and as a consequence the size of it doesn't really
> > matter (though I'm only caching small query results anyway).
>
> Thanks for clarifying. I've dealt a lot with "request-scope cache"
> scenarios. Here is my approach:
>
> 1. Set the lowest reasonable upper limit on the number of entries in the
> main cache (parent of local caches). E.g. 3x of your average requests / min
> or something.
> 2. Set fairly short expiration times. E.g. 10x of your average response
> time (say 10-30 sec).
>
> (1) ensures quick turnover and prevents leaks
> (2) ensures your request doesn't fetch the same data twice, and will not
> result in any stale data, as every new context will start its own "region"
> for the local cache
>
> So you may end up using a bit more memory than absolutely needed, but you
> avoid the leaks, and get the desired freshness/caching balance.
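> For reference, the two settings above map onto a cache region configuration
> roughly like this (an Ehcache 2.x-style sketch; the numbers are placeholders
> to be derived from your own request rate and response time, per the formulas
> above):
>
> ```xml
> <ehcache>
>     <!-- Sketch only: tune maxEntriesLocalHeap to ~3x avg requests/min
>          and timeToLiveSeconds to ~10x avg response time. -->
>     <defaultCache maxEntriesLocalHeap="3000"
>                   timeToLiveSeconds="30"
>                   eternal="false"/>
> </ehcache>
> ```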
>
> > I've narrowed it down - the "extra" memory being retained by the
> > ObjectContext is: entries in ObjectStore.objectMap - but not the
> > DataObjects themselves (those get cleared); it's just the references to
> > those objects: the mapping from ObjectId -> WeakReference. These entries
> > stay present after the WeakReference is cleared. All those ObjectIds
> > (though small) add up to a significant amount of memory over time.
>
> Good catch. DataContext getting stuck in the local cache does seem to
> leave a lot of unintended garbage floating around. While it doesn't change
> the equation above, it is still a waste that we may need to address.
>
> So to me the approach above is "good enough". Though we may still come up
> with a design that allows for more control (and hence the lowest possible
> memory footprint). I suspect any such solution would require
> ObjectContext.close() method to allow the user to delineate context scope.
>
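To make the retention pattern above concrete, here is a minimal standalone sketch (hypothetical names, not Cayenne's actual ObjectStore code) showing that a map holding WeakReference values keeps its keys and reference objects alive even after the referents themselves are gone:

```java
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

public class WeakValueLeakDemo {
    public static void main(String[] args) {
        // Analogous to ObjectStore.objectMap: id -> weak ref to the object.
        Map<String, WeakReference<Object>> objectMap = new HashMap<>();
        objectMap.put("ObjectId:Artist:1", new WeakReference<>(new Object()));

        // Simulate the GC collecting the referent (Reference.clear()
        // deterministically mimics what the collector would do).
        objectMap.get("ObjectId:Artist:1").clear();

        // The object is gone, but the entry (key + empty reference) remains
        // in the map until something explicitly prunes it.
        System.out.println("referent = " + objectMap.get("ObjectId:Artist:1").get());
        System.out.println("entries = " + objectMap.size());
    }
}
```

Run it and the referent prints as null while the map still reports one entry: the per-entry overhead is small, but it accumulates for as long as the context is pinned by the cache.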

I've decided to try using a simple QueryCache subclass I'm calling
PartitionedQueryCache
<https://gist.github.com/johnthuss/c5f6b6ce76036afd028199da312ddb08>, which
enables independent lifetimes for the shared cache and the local caches.
I'm going to run with this for a while. If there is any interest I can
contribute it.
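For anyone skimming before opening the gist, the core idea can be sketched as two backing stores with independent lifetimes (hypothetical names and a plain-map stand-in, not the gist's actual code or Cayenne's QueryCache API):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: partition cache entries so that request-scoped ("local") entries
// can be discarded wholesale without touching the long-lived shared cache.
public class PartitionedCacheSketch {
    private final Map<String, List<?>> shared = new ConcurrentHashMap<>();
    private final Map<String, List<?>> local = new ConcurrentHashMap<>();

    public void putShared(String key, List<?> results) { shared.put(key, results); }
    public void putLocal(String key, List<?> results) { local.put(key, results); }

    // Local entries win; fall back to the shared partition.
    public List<?> get(String key) {
        List<?> r = local.get(key);
        return r != null ? r : shared.get(key);
    }

    // Called when the request's ObjectContext goes out of scope:
    // local entries vanish, shared entries survive.
    public void endRequest() { local.clear(); }

    public static void main(String[] args) {
        PartitionedCacheSketch cache = new PartitionedCacheSketch();
        cache.putShared("countries", List.of("US", "CA"));
        cache.putLocal("cartTotal", List.of(42));
        cache.endRequest();
        System.out.println("local after endRequest = " + cache.get("cartTotal"));
        System.out.println("shared after endRequest = " + cache.get("countries"));
    }
}
```

After endRequest() the local lookup returns null while the shared entry is still served, which is exactly the independent-lifetime behavior described above.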

Thanks!
