cocoon-dev mailing list archives

From Unico Hommes <un...@hippo.nl>
Subject Re: Event caching and CachedSource
Date Tue, 02 Mar 2004 13:58:27 GMT
Geoff Howard wrote:

> Unico Hommes wrote:
>
>> Geoff Howard wrote:
>>
>>> Unico Hommes wrote:
>>>
>>>> Hi gang :-)
>>>>
>>>> A drawback I have been running into lately with the eventcache 
>>>> mechanism is that it lacks the ability to remove heavy processing 
>>>> from the critical path. An event will simply remove a set of cached 
>>>> pipelines from the cache completely, making the subsequent request 
>>>> for such a pipeline potentially very slow. In applications where 
>>>> isolation is not a requirement this is an unnecessary drawback.
>>>
>>>
>>>
>>> Below sounds interesting and good but I haven't understood how event 
>>> cache is related.  AFAICS the only difference between eventcache and 
>>> the other validity types is that for the others an invalid response 
>>> is found in cache but not used, because it is found invalid after 
>>> retrieval, whereas the event cache removes the entry at invalidation 
>>> time since it knows it will never be useful.  Both cases mean that 
>>> the next person to request that resource will have to wait for the 
>>> full generation.  Maybe that's because I've only glanced at the 
>>> refresher stuff?
>>>
>> I guess you are right that at the Cache level nothing really changes. 
>> I overlooked that fact. I will do some more research on what is 
>> required to accomplish that in the case of the Refresher, but my idea 
>> was that the cached response would be served until a newly generated 
>> one could replace the stale one. Since the Refresher talks to the 
>> Cache directly, given the correct Validity strategy it can exercise 
>> full control over it.
>
>
>
> So, stale entries are served until they can be regenerated?  I've 
> looked for this in the past (someone called it the "I'm Sorry" pattern 
> :) ) and at the time thought it might be better implemented by a 
> pluggable strategy at the pipeline execution level.  Currently we have:
>
> - Assemble Pipeline
> - Gather key from Pipeline
> - Check cache for key
> - If object for key found, check its validity
> - If valid, serve the cached response
> - Else, execute pipeline and serve it.
>
> The cache point pipeline and the non-caching pipeline are other 
> implementations of different strategies, but they are accomplished by 
> inheritance instead of composing a Strategy.  I haven't ever thought 
> it through carefully but it seems like making those last 5 steps (as a 
> group) a pluggable strategy would allow things like this "I'm Sorry" 
> pattern, as well as more powerful concepts like Stefano's proposed 
> adaptive cache.  Just raw thoughts at this point...
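
Reading the five steps above together with the "I'm Sorry" idea, the
pluggable part would be the decision of what to do with a (possibly
stale) cache hit. The following is a minimal standalone sketch of that
decision as a strategy, using plain JDK types only; Entry,
ServingStrategy, StrictStrategy and ServeStaleStrategy are made-up names
for illustration, not the actual Cocoon Cache or pipeline interfaces.

    import java.util.Map;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.function.Supplier;

    // A cached response plus a validity flag. The real Cocoon cache stores
    // richer response objects with validity information; a boolean stands
    // in here.
    class Entry {
        final byte[] response;
        volatile boolean valid = true;
        Entry(byte[] response) { this.response = response; }
    }

    // The pluggable part: given a (possibly stale) hit and a way to
    // regenerate, decide what to serve and how to update the cache.
    interface ServingStrategy {
        byte[] serve(String key, Entry hit, Supplier<byte[]> regenerate,
                     Map<String, Entry> cache);
    }

    // Today's behaviour: an invalid hit is discarded and the request
    // waits for full regeneration.
    class StrictStrategy implements ServingStrategy {
        public byte[] serve(String key, Entry hit, Supplier<byte[]> regenerate,
                            Map<String, Entry> cache) {
            if (hit != null && hit.valid) {
                return hit.response;
            }
            byte[] fresh = regenerate.get();          // caller blocks here
            cache.put(key, new Entry(fresh));
            return fresh;
        }
    }

    // The "I'm Sorry" pattern: serve the stale entry immediately and let
    // a background thread swap in a fresh one.
    class ServeStaleStrategy implements ServingStrategy {
        private final ExecutorService refresher =
            Executors.newSingleThreadExecutor();

        public byte[] serve(String key, Entry hit, Supplier<byte[]> regenerate,
                            Map<String, Entry> cache) {
            if (hit == null) {                        // nothing to fall back on
                byte[] fresh = regenerate.get();
                cache.put(key, new Entry(fresh));
                return fresh;
            }
            if (!hit.valid) {                         // stale: refresh off the critical path
                refresher.submit(() -> cache.put(key, new Entry(regenerate.get())));
            }
            return hit.response;
        }
    }

A caller would keep the entries in something like a ConcurrentHashMap
and pick the strategy from configuration; StrictStrategy blocks on
regeneration much as the current caching pipeline does, while
ServeStaleStrategy answers from the stale entry and lets the refresher
thread replace it in the background, which seems to be what the
Refresher idea is after.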


I see two things at stake in my use case: the strategy pattern as you 
call it (regular, inverted, 'I'm Sorry', adaptive, etc.) and the 
granularity of objects in the cache. In my case it is very inefficient 
to only cache complete pipelines, and I need multiple levels of caching 
to optimize performance: besides caching the complete pipeline, also 
the individual sources that compose a traversable generation.
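
For what it's worth, a minimal sketch of that two-level idea (plain JDK
maps, made-up class and method names, not the actual CachedSource or
Cache API): the aggregate pipeline result and the individual sources are
cached separately, so an event only empties the entries it really
touches.

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical two-level cache: a whole-pipeline cache in front of a
    // per-source cache, so an event for one source does not force every
    // source of a traversable generation to be re-read.
    class TwoLevelCache {
        private final Map<String, String> pipelineCache = new ConcurrentHashMap<>();
        private final Map<String, String> sourceCache = new ConcurrentHashMap<>();

        String getPage(String pipelineKey, List<String> sourceUris) {
            String page = pipelineCache.get(pipelineKey);
            if (page != null) {
                return page;                          // level 1: whole pipeline
            }
            StringBuilder assembled = new StringBuilder();
            for (String uri : sourceUris) {
                // level 2: each source cached on its own, roughly the role
                // a source-level cache plays for a single source URI
                assembled.append(sourceCache.computeIfAbsent(uri, this::readSource));
            }
            page = assembled.toString();
            pipelineCache.put(pipelineKey, page);
            return page;
        }

        // An event names the source that changed; only that entry and the
        // aggregate built from it are dropped, the other sources stay warm.
        void invalidate(String changedSourceUri, String pipelineKey) {
            sourceCache.remove(changedSourceUri);
            pipelineCache.remove(pipelineKey);
        }

        private String readSource(String uri) {
            // stand-in for the expensive read/parse of one source
            return "<content src=\"" + uri + "\"/>";
        }
    }

Regenerating the invalidated aggregate then only pays for the one source
that actually changed, which is the part I want off the critical path.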

I am not sure I understand what you mean by 'pluggable strategy'. Isn't 
this what we already have with the different pipeline implementations?

Unico
