cocoon-dev mailing list archives

From "Hunsberger, Peter" <Peter.Hunsber...@stjude.org>
Subject RE: [RT] Adaptive Caching
Date Fri, 18 Jul 2003 14:27:22 GMT
Geoff Howard <cocoon@leverageweb.com> writes:

<snip on questions for Stefano which I think I might be able to
answer, but I don't want to put words into his mouth and I'm too lazy
at the moment anyway/>

> > At first it would seem that if there is no way to determine the
> > ergodic period of a fragment there is no reason to cache it!
> > However, there is an alternative method of using the cache (which
> > Geoff Howard has been working on) which is to have an event
> > invalidated cache.  In this model cache validity is determined by
> > some event external to the production of the cached fragment and
> > the cached fragment has no natural ergodic period.  Such fragments
> > still fit mostly within the model given here: although we do not
> > know when the external event may transpire we can still determine
> > that it is more efficient to regenerate the fragment from scratch
> > than retain it in cache.
> 
> Another interesting thing about this kind of setup is that if you
> commit to it, you could get out of all validity calculations
> altogether.  If it's still in the cache, serve it.  I will be
> experimenting with this to see if that gets any benefit in practice.
 
Yes, that makes sense.  In our implementation we use both delta time
and events.  The only reason for the delta time is as a poor man's
replacement for cache cost calculations: if the item is requested
again (after the delta time expires) we pay the cost, but otherwise we
don't consume cache unnecessarily.
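
Something like the following is what I have in mind -- just a sketch
with made-up names, not our actual code.  An entry stays valid until
its delta time expires or an invalidating event fires, whichever comes
first:

    // Hypothetical names, for illustration only.
    class CacheEntry {
        private final Object value;
        private final long createdMillis;
        private final long deltaMillis;   // poor man's cost calculation
        private volatile boolean eventInvalidated = false;

        CacheEntry(Object value, long deltaMillis) {
            this.value = value;
            this.createdMillis = System.currentTimeMillis();
            this.deltaMillis = deltaMillis;
        }

        // Called when the external event transpires.
        void invalidate() {
            eventInvalidated = true;
        }

        // Valid only if no event has fired and the delta time
        // hasn't expired yet.
        boolean isValid() {
            return !eventInvalidated
                && System.currentTimeMillis() - createdMillis < deltaMillis;
        }

        Object getValue() {
            return value;
        }
    }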

> > If a cache invalidating event transpires then, for such fragments,
> > it may also make sense to push the new version of the fragment
> > into the cache at that time.  Common use cases might be CMSs where
> > authoring or editing events are expensive and rare (eg. regen
> > Javadoc).  In our case, we have a large set of metadata that is
> > expensive to generate but rarely updated.  This metadata is global
> > across all users and if there are resources available we want it
> > in the cache.
> >
> > This points out that in order to push something into cache one
> > wants to make the same calculation as the cache manager would make
> > to expire it from cache; is it more efficient to push a new
> > version of this now?  If not there may eventually be a pull
> > request at which point the normal cache evaluation will determine
> > how long to keep the new fragment cached.
> 
> This would be better IMHO if it was left to the cache's discretion
> to cache the pushed update or not.  If it was currently cached, it
> would make sense but otherwise not.  For instance, if I update an
> entire table with rows which never get requested, you wouldn't want
> them pushed into the cache especially at the expense of more
> valuable entries.

Yes, that also makes sense; you don't need the calculation performed
by the pusher if you have a good interface for doing the pushing.
Essentially, you just want to be able to trigger a pseudo page request
as the result of an event.  Then the normal pipeline can take care of
the rest of the calculations as before.  However, there is one
difference: there's no need to complete the pseudo request if at any
point you determine the results shouldn't be cached...
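
Roughly like this, say (hypothetical interfaces, just to sketch the
idea): on an invalidating event we regenerate only if the fragment is
currently cached, and the pseudo request can be abandoned as soon as
we decide the result shouldn't be cached:

    // Hypothetical interfaces, for illustration only.
    interface FragmentPipeline {
        // Runs the normal pipeline for the given key, or returns null
        // if at some point it decides the result shouldn't be cached.
        Object generate(String key);
    }

    class EventDrivenCache {
        private final java.util.Map cache = new java.util.HashMap();
        private final FragmentPipeline pipeline;

        EventDrivenCache(FragmentPipeline pipeline) {
            this.pipeline = pipeline;
        }

        // Called when an external event invalidates a fragment.
        synchronized void onInvalidatingEvent(String key) {
            if (!cache.containsKey(key)) {
                return;  // never requested: skip the push entirely
            }
            Object fresh = pipeline.generate(key);
            if (fresh == null) {
                cache.remove(key);  // pseudo request abandoned
            } else {
                cache.put(key, fresh);
            }
        }
    }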


