cocoon-dev mailing list archives

From Patrik Nyborg <patrik.nyb...@aderagroup.com>
Subject RE: [RT] Cocoon in cacheland
Date Tue, 28 Nov 2000 07:13:32 GMT
> How should we store the cache? 
> It's potentially 'rather big', but it's crucial we have it fast. 
> I'd be tempted to use a two layer cache - first layer in ram, 
> and second layer on backing store. 
Hmmmm, I would prefer an N-level cache; for example, I would distinguish
between a shared RAM cache and a personalized RAM cache, where the latter
holds only unique (highly requested) personalized stuff (which is
aggressively fetched/flushed when the visitor enters/leaves the site).

Hopefully the cache architecture will be designed in such a way that
N levels can be configured if needed, while the reference implementation
uses some ordinary 2-level setup (or perhaps 1-level).
Conceptually, some sort of configurable cache chaining...
Furthermore, the design will probably come out better if you keep the
N-level case in mind when thinking about the cache stuff, since it is the
more general one.
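
Just to illustrate what I mean by chaining (all names made up here, this
is not anything in the current Cocoon code): each level answers from its
own store and falls back to the next level on a miss.

    interface CacheLevel {
        Object get(String key);
        void put(String key, Object value);
    }

    class ChainedCache implements CacheLevel {
        private final CacheLevel level;  // e.g. shared RAM, personalized RAM, disk
        private final CacheLevel next;   // null for the last level in the chain

        ChainedCache(CacheLevel level, CacheLevel next) {
            this.level = level;
            this.next = next;
        }

        public Object get(String key) {
            Object value = level.get(key);
            if (value == null && next != null) {
                value = next.get(key);
                if (value != null) {
                    level.put(key, value);  // promote to the faster level
                }
            }
            return value;
        }

        public void put(String key, Object value) {
            level.put(key, value);
        }
    }

The 2-level reference implementation would then just be a RAM level
chained in front of a disk level.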


I think the concept of anticiparallelism is important: a backend server
(on a remote host) could handle all the update/precomputation operations,
while the Cocoon delivery engine would _only_ perform the
generations/transformations etc. that _must_ be handled in real time.
Basically, kill all real-time operations that are O(N) or worse.

I'm not suggesting that this concept should be implemented now, but it
would of course be nice if it were possible to _hook_ such functionality
in from the API.
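
Something as simple as this would do (purely hypothetical names, not a
proposal for the actual API), just a plug-in point the delivery engine
notifies so a backend implementation can precompute and push results
into the cache ahead of time:

    interface PrecomputationHook {
        // Called when a visitor enters the site; an implementation could ask a
        // backend host to precompute and push that visitor's personalized pages.
        void visitorEntered(String sessionId);

        // Called when a cached entry is invalidated; an implementation could have
        // the backend regenerate it instead of waiting for the next request.
        void entryInvalidated(String cacheKey);
    }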


> When something is used, it's loaded from disk, and when ram gets full,
> we stick it back on disk.
Once again, it would be nice if it were possible to plug in some kind of
adaptable behaviour (see the sketch below); for example, there is no need
to flush stuff (as indicated by its runtime behaviour) that:
* doesn't take long to generate (faster to generate than to fetch)
* is invalidated frequently
* ...
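
For example, the flush decision could be delegated to a policy object
roughly like this (hypothetical names again, just a sketch): the RAM
cache asks the policy before pushing an entry down to backing store, so
cheap-to-regenerate or frequently-invalidated entries are simply dropped.

    interface CacheEntryStats {
        long generationCostMillis();  // how long the entry took to generate
        long fetchCostMillis();       // estimated cost of reading it back from disk
        int invalidationsPerHour();   // how often it has been invalidated lately
    }

    interface FlushPolicy {
        boolean shouldWriteToBackingStore(CacheEntryStats entry);
    }

    class CostAwareFlushPolicy implements FlushPolicy {
        public boolean shouldWriteToBackingStore(CacheEntryStats entry) {
            // Faster to regenerate than to fetch back from disk: just drop it.
            if (entry.generationCostMillis() < entry.fetchCostMillis()) {
                return false;
            }
            // Invalidated so often that writing it out is wasted work.
            if (entry.invalidationsPerHour() > 10) {
                return false;
            }
            return true;
        }
    }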


So basically what I'm saying is: make it possible for developers to plug
in their own caching stuff in an easy/clean way. The reference
implementation could be kinda straightforward, i.e. handle the common
situations.



To cache or not to cache? Remember:
"speed is god, time is the devil"



Patrik Nyborg, Adera+
