cocoon-dev mailing list archives

From Grzegorz Kossakowski <g...@tuffmail.com>
Subject Re: Postable source and servlet services problem
Date Sat, 24 Feb 2007 14:19:05 GMT
Peter Hunsberger wrote:
>>
>>
>> Suppose that we have this http_post generator that parses (as XML) the
>> body of a POST request. Of course this pipeline will work correctly only
>> for POST requests, so suppose we have one. My question is:
>> How could a cache key and cache validity object be created for this kind
>> of generator? Could you please provide a fairly detailed description, as
>> I would like to understand this issue.
>
> It's generated in exactly the same way as any other cache key. It
> depends completely on the internal implementation of the generator and
> transformer(s) and what they consider to affect the cacheability of
> the results they produce.
>
> Consider first the case of a search form where no data is present in
> the initial presentation of the form. The only requirement here is
> that the form can be uniquely identified with respect to the cache key
> (for example "patient.search" to take an example from our system).
> Now consider the same form that has been filled out by the user but
> that has errors on submission and has to be redisplayed. The final
> result is generally not cached; the combination of the form and the
> data to be presented is essentially a unique instance and there is no
> point in caching it.
>
> On the Cocoon side you have a couple of ways to handle this. In our
> case, the basic form (with no data) is generated in exactly the same
> way in both cases with the same cache key. However we aggregate that
> result with another generator that generates any data to be presented
> within the form. When no data is present a constant cache key is
> generated and a simple SAX wrapper around what is essentially a null
> result is cached (which may be used across many different form
> combinations). When data is present this particular generator always
> returns a null cache key and the data is not cached (or the key points
> to a validity that will return false for the validity check). The
> results of the cached form and the sometimes cached data now have an
> aggregate cache key; in one case it is valid and everything in the
> pipeline can be cached. In the other case the aggregate key is not
> valid and the final results of the pipeline are not cached (even
> though partial SAX streams inside the pipeline are). If a user POSTs
> an empty search form the pipeline might produce exactly the same
> results as the original GET that first generated the form; it's not
> the GET or POST that determined the cacheability, it's the data that
> was generated in response to them.
>
> There are other use cases where form data can be cached, but I hope
> this helps?
>
> FWIW, we are starting to move away from a standardized HTTP POST
> response pattern and implement pure AJAX based forms where the data
> exchanges are based on XMLHttpRequest interchanges. This separates the
> generation of the form from the data handling completely, however the
> basics of caching remain the same: if the pipeline that responds to
> the XMLHttpRequest decides that the output can be cached it generates
> a key that uniquely identifies the response. The same sub-pipeline
> generates the same results for a GET, a POST, or an XMLHttpRequest
> under the covers; it doesn't care how the request originated...
Yes, now I see that we've been perceiving the same problem (cacheability 
of POST requests) from different points of view. I've been stating that 
there is no single *generic* generator that can stream POST data and 
efficiently generate a caching key for that data. And you've been stating 
that it's not true that POST requests are uncacheable _by their nature_, 
and have given good examples showing that they can be cacheable. I think 
we understand each other now. Thanks for your comments on this.
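
Just to check my understanding, your aggregation scheme could be modelled 
roughly like this. This is a plain-Java sketch, not the actual Cocoon 
caching API (which, as far as I remember, goes through 
CacheableProcessingComponent with Serializable keys and SourceValidity 
objects); every class and method name below is made up for illustration:

```java
/**
 * A toy model of the aggregation scheme described above -- NOT the real
 * Cocoon caching API. All names here are illustrative.
 */
public class CacheKeyModel {

    /** A pipeline part returns a cache key, or null when uncacheable. */
    public interface Part {
        String key();
    }

    /** The bare form: generated identically every time, constant key. */
    public static Part form(String formId) {
        return () -> formId;                 // e.g. "patient.search"
    }

    /**
     * The data generator: a constant key for the empty result (a null
     * SAX wrapper reusable across many forms), a null key when user
     * data is present -- a unique instance, no point in caching it.
     */
    public static Part data(String userData) {
        return () -> userData == null ? "data.empty" : null;
    }

    /**
     * Aggregate key: defined only if every part produced a key. One
     * null key makes the whole pipeline result uncacheable, even though
     * the parts that did produce keys may still be cached individually.
     */
    public static String aggregate(Part... parts) {
        StringBuilder sb = new StringBuilder();
        for (Part p : parts) {
            String k = p.key();
            if (k == null) {
                return null;                 // whole result uncacheable
            }
            sb.append(k).append('/');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Empty search form: cacheable, whether it came in via GET or
        // an empty POST -- only the generated data matters.
        System.out.println(aggregate(form("patient.search"), data(null)));

        // Form redisplayed with user input after a validation error:
        // the data part has no key, so the final result is not cached.
        System.out.println(aggregate(form("patient.search"), data("name=Smith")));
    }
}
```

The point the sketch tries to capture is that a single null key anywhere 
in the aggregation makes the final result uncacheable, while the parts 
that did produce keys can still be cached on their own.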

> I don't like any implementation that completely hides its workings.
> However, having the key information passed about as metadata to the
> data stream potentially allows for _any_ SAX data stream to become
> part of the final results and still be cached. I can't give you a
> concrete use case, but I'm guessing that this could be used for SOAP
> and other foreign data stream encapsulation. Of course if you really
> want to have that option then that means some kind of standardized
> metadata and that's what standard HTTP headers are all about. So maybe
> the "proper" implementation here would be to use completely formed
> responses and parse the headers! That's real work and no longer
> trivial with no direct benefit for the moment. Moreover, nothing you
> are doing would preclude such an implementation in the future; I could
> see some form of standardized SOAP like parser building a cache key
> for a foreign data stream that would then be coupled into the pipeline
> implementation that you are proposing if need be.
>
> Phew, a lot of discussion, but I think it's important; as Cocoon
> separates into discrete blocks we are essentially going to have to
> decide how decoupled the blocks are. Caching often seems to be an
> afterthought in distributed systems (which is what we will be
> building) and it's important to understand the implications of the
> design decisions up front. If you had presented your current proposal
> when you originally asked the question I probably wouldn't have even
> responded, but continued to have some nagging thoughts about this
> issue that I never expressed. So forgive the rambling, but it helps
> me even if it doesn't help you...
>

It helps me also, and I agree we will have to discuss how far the 
decoupling process can go. I'll start a discussion in a week or so; I 
just need more time to work out the scope, my point of view, etc.
I hope you will join in with your ideas.

-- 
Grzegorz Kossakowski
