cayenne-user mailing list archives

From John Huss <>
Subject Re: Object cache - shared vs local
Date Tue, 27 Jun 2017 21:45:19 GMT
Thanks Andrus, that was very helpful.

I've never seen much documentation on how caching works, particularly for
the Object caches.  So I'm going to write what I've learned here and maybe
it can help someone else. If there is anything amiss please correct me.


The "Shared Object Cache" is a DataRowStore (snapshot cache) that can be
consulted by EVERY ObjectContext to fulfill requests primarily for
relationship faults (lazy loading). If the shared Object cache is enabled
then any fired fault will end up in the cache and will be available to
every ObjectContext in the app that fires the same fault in the future.
Cayenne won't refetch the row from the database unless you have explicitly
requested a refresh (or an explicit prefetch). However, the size of the
cache is limited by the Domain configuration (in Cayenne Modeler) property
"size of object cache". So rows will be purged from the cache if it gets
full.  This requires caution since you can't count on any shared data
staying put.  Making the cache extremely large may avoid having your data
evicted, but will waste memory as every object you fetch without an
explicit Local cache strategy ends up in this cache.
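The eviction behavior is easy to picture as a size-bounded LRU map. The sketch below is only an illustration of the concept — it is NOT Cayenne's actual DataRowStore implementation, and all names in it are made up:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy stand-in for a snapshot cache: maps an ObjectId-like key to a row.
public class ToySnapshotCache extends LinkedHashMap<String, Map<String, Object>> {
    private final int maxSize; // plays the role of "size of object cache"

    public ToySnapshotCache(int maxSize) {
        super(16, 0.75f, true); // access-order, so it behaves as an LRU
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, Map<String, Object>> eldest) {
        // Once the cache is full, the least recently used row is silently
        // evicted -- this is why you can't count on shared data staying put.
        return size() > maxSize;
    }
}
```

With a max size of 2, putting a third row silently drops the oldest one, which is exactly the "data not staying put" hazard described above.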

The "Local Object Cache" is an ObjectStore that each ObjectContext has a
separate instance of. It is tied directly to an individual ObjectContext.
This allows you to hold on to data that you've prefetched into the context
(or explicitly fetched and lost reference to) but haven't accessed
otherwise yet. It disappears when the ObjectContext disappears. It is
actually backed by its own DataRowStore, which is inconsequential EXCEPT
for the fact that it is also affected by the "size of object cache"
property mentioned above. If this size is smaller than the number of rows
returned by a single query plus its prefetches, Cayenne will ignore your
prefetched data and fault in those relationships one at a time.
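That failure mode can be simulated with a toy size-bounded LRU cache (again an illustrative sketch with invented names, not Cayenne's ObjectStore): prefetch n rows into a cache smaller than n, and by the time you come back to read them, the earliest rows are already gone — each miss would become a one-row fault.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PrefetchEvictionDemo {
    // A cache that holds at most maxSize entries, evicting least-recently-used.
    static <K, V> Map<K, V> lruCache(int maxSize) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        };
    }

    // How many of n "prefetched" rows survive in a cache of the given size
    // by the time we come back to read them?
    static int survivingRows(int cacheSize, int n) {
        Map<Integer, String> cache = lruCache(cacheSize);
        for (int i = 0; i < n; i++) {
            cache.put(i, "row-" + i); // simulate storing prefetched rows
        }
        int hits = 0;
        for (int i = 0; i < n; i++) {
            if (cache.containsKey(i)) hits++; // a miss = a one-row fault
        }
        return hits;
    }
}
```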

My recommendations for users are to:
1) Disable the Shared Object Cache. Otherwise you'll have to be vigilant to
avoid stale data in cases where you forgot a prefetch to refresh related
data. It's better to over-fetch than to return invalid (stale) data. That
makes it a performance problem instead of a correctness problem, which is a
better default behavior.

If you ARE going to use the Shared Object Cache, then you should use a
Local cache strategy on all your queries that you don't want to end up in
the Shared cache, and you should be very careful to prefetch every
relationship that you need to be fresh.

2) After you've disabled the Shared Object cache, set the size of the
Object cache (this is just for the *Local* Object Cache now) to MAX_INT
(2147483647). Otherwise you risk having Cayenne ignore data that you have
explicitly prefetched and having it fall back to horrendous
one-row-at-a-time fetches. A separate cache is created for each
ObjectContext and only lives as long as the context does, so you don't
really have to worry about the potentially large size of the cache as long
as your contexts are all short-lived.

3) In the small number of cases where you actually WANT to have a shared
cache (like for read-only lookup tables) you can implement a
DataChannelFilter to act as a cache for specific entities. This will ensure
that reads of relationships to these lookup tables will always hit the
cache. This takes a bit of effort, but it works.
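The filter idea in (3) is a chain-of-responsibility: the filter sees each query before it reaches the database and may answer certain entities from its own cache instead of proceeding down the chain. The sketch below is self-contained and only mirrors the *shape* of Cayenne's DataChannelFilter.onQuery(..) — the real interface has a different signature, and every name here is invented:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LookupTableFilterDemo {
    // Minimal hypothetical stand-ins for a query and the filter chain.
    interface Chain { List<String> proceed(String entity); }
    interface Filter { List<String> onQuery(String entity, Chain chain); }

    // Caches results for one designated read-only lookup entity forever;
    // queries for everything else fall through to the "database" (the chain).
    static class LookupCacheFilter implements Filter {
        private final Map<String, List<String>> cache = new HashMap<>();
        private final String cachedEntity;

        LookupCacheFilter(String cachedEntity) { this.cachedEntity = cachedEntity; }

        @Override
        public List<String> onQuery(String entity, Chain chain) {
            if (!entity.equals(cachedEntity)) {
                return chain.proceed(entity); // not a lookup table: pass through
            }
            // First read populates the cache; later reads never hit the chain.
            return cache.computeIfAbsent(entity, chain::proceed);
        }
    }
}
```

The important property is the one the paragraph describes: once populated, every read of the cached entity is answered by the filter, so relationship fetches to the lookup table never reach the database again.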


The Query cache is completely separate from both the Shared Object Cache and the Local
Object Cache. However it also has Shared and Local versions, where Shared
query results are available to EVERY ObjectContext and Local query results
are only available to a single ObjectContext. If you are using the query
cache you should set it up explicitly with a custom cache provider like
EhCache. While there may be a learning curve for your cache provider,
Cayenne's behavior with the cache won't surprise you - the configuration is
up to you.

A "Local" query cache with no expiration (no cache group) is useful as a
way to treat an explicit query like a relationship fault since the query
will only be executed once during an ObjectContext's lifetime.

A "Shared" query cache is useful for data where you want to avoid having to
fetch every single time and where some amount of staleness is ok. The
cached query result can be configured to expire after a fixed time period or
when a triggering event occurs.
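Both invalidation styles can be sketched with a tiny query-result cache. This is a toy to show the two mechanics — a real setup would delegate storage and expiration to a provider such as EhCache, and every name below is invented:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ToyQueryCache {
    private static final class Entry {
        final List<String> rows;
        final long storedAtMillis;
        Entry(List<String> rows, long storedAtMillis) {
            this.rows = rows;
            this.storedAtMillis = storedAtMillis;
        }
    }

    private final Map<String, Entry> entries = new HashMap<>();
    private final long ttlMillis; // fixed time period after which results expire

    public ToyQueryCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    // Null means a miss (or an expired entry): caller must run the real query.
    public List<String> get(String queryKey, long nowMillis) {
        Entry e = entries.get(queryKey);
        if (e == null || nowMillis - e.storedAtMillis > ttlMillis) {
            return null;
        }
        return e.rows;
    }

    public void put(String queryKey, List<String> rows, long nowMillis) {
        entries.put(queryKey, new Entry(rows, nowMillis));
    }

    // "Triggering event" invalidation, like dropping a whole cache group.
    public void removeAll() { entries.clear(); }
}
```

Time is passed in explicitly so the expiration logic is easy to follow (and to test) without real clocks.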

On Tue, Jun 27, 2017 at 9:39 AM Andrus Adamchik <> wrote:

> > On Jun 27, 2017, at 10:14 AM, John Huss <> wrote:
> >
> > I would be ok with disabling the shared cache except for a few entities.
> > Is there a way with listeners to intercept queries for specific entities
> > and return something manually and skip the query?
>
> DataChannelFilter.onQuery(..) theoretically can do that.
>
> > Using the query cache is great, except if you are firing relationships
> > that weren't prefetched -- in that case you can't avoid using the shared
> > cache and getting stale data.
>
> True. You will have to guess upfront which relationships are needed if
> cache control is a concern.
>
> Andrus
