tomcat-dev mailing list archives

From Michal Mosiewicz <m...@interdata.com.pl>
Subject Re: Discussion: AJP next
Date Fri, 11 Feb 2000 02:55:38 GMT
Jean-Luc Rochat wrote:
> [...]
> Feedback welcome.

Just one more thing...

I've been pretty busy these days and I'm catching up with this
discussion.

Back in September I was researching something that, in my projects, would
give me roughly 3 to 20 times speedups - a decent caching algorithm.

Something similar already exists in Resin; you can look at its benchmarks
to get the figures. The main idea is that even in the case of fully dynamic
content, you can point out areas that are more or less dynamic. Some data
may be very volatile and may need to be read from the database or other
storage on every request, but some can stay valid for several seconds or
minutes.

At first I tried to accomplish my data caching by using dispatchers and
routing some dispatch requests through Apache, so that some subrequests
would be cached. But then I noticed that it was not such a good idea.

Finally I tested a technique of 'bracketing' some parts of the content and
identifying them through references.

How it works (or is intended to work): basically, when I send the response
to Apache, I can mark some part of the response data with a unique
reference id and an expiration time. Then, once the response is received in
Apache, the marked data is stored in a local heap and indexed by the
reference id.

Of course, the Java backend also stores this reference to the data fragment
it has just sent to Apache, so it knows that it doesn't need to send the
whole content again. Instead, on the next request it is sufficient to send
only the reference. Apache can then use this reference to fetch the data
from the heap and include it in the response.
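To make it a bit more concrete, here is a rough Java sketch of the
Apache-side fragment store (all names are invented and nothing like this
exists yet; the real store would of course be C code inside the Apache
module - this only illustrates the data structure and the expiry check):

import java.util.Hashtable;

// A minimal sketch of the fragment store (invented names, not real
// mod_jserv code): response fragments indexed by reference id, each with
// an expiration time.  The real store would live inside Apache, in C;
// this Java version only illustrates the idea.
public class FragmentHeap {

    static class Entry {
        byte[] data;
        long expiresAt;          // absolute expiry time, in milliseconds
    }

    private final Hashtable entries = new Hashtable();

    // Store a marked fragment received with a response.
    public void store(String referenceId, byte[] data, long ttlMillis) {
        Entry e = new Entry();
        e.data = data;
        e.expiresAt = System.currentTimeMillis() + ttlMillis;
        entries.put(referenceId, e);
    }

    // Look up a fragment by the reference id sent with a later response.
    // Returns null if the fragment is unknown or already expired.
    public byte[] lookup(String referenceId) {
        Entry e = (Entry) entries.get(referenceId);
        if (e == null) {
            return null;
        }
        if (e.expiresAt < System.currentTimeMillis()) {
            entries.remove(referenceId);
            return null;
        }
        return e.data;
    }
}

Since the backend set the expiration time itself, it can presumably track
it as well and simply resend the full fragment (with a fresh reference)
once the time has passed, so an expired entry never produces stale output.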

I didn't finish implementing it because there were (and still are)
architectural issues making it hard to use. But the figures were still very
promising. For example, at the time I was developing a kind of portal
website. The most frequently viewed pages involved a lot of data taken from
the database. On pretty reasonable hardware, I couldn't serve those pages
faster than 150-250 ms. However, for most parts of those pages it would be
harmless to cache them for several seconds; sometimes even minutes wouldn't
make much difference. I tested some code using a hacked JServ, and it
turned out that the above delay could easily be shortened to less than
15 ms. Of course, sometimes I could cache the whole page and serve it even
faster - but that's pretty obvious. However, there are many cases where you
just can't cache the whole page, while you are free to cache some or most
of its parts.

The more difficult part is how to implement such a mechanism in the JSDK.
It's a real shortcoming that subrequests, i.e. included dispatches, are not
able to pass information useful for caching (like expiration); using them
would otherwise be the easiest way.
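For illustration only, here is roughly how such a hint could be passed with
the current dispatch API, assuming an invented attribute name that nothing
in the JSDK or in any connector understands today:

import java.io.IOException;
import javax.servlet.RequestDispatcher;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical sketch only: the "cache.expiration" attribute name is
// invented and nothing in the current JSDK or connectors reads it.  It
// shows the kind of hint an included dispatch could carry.
public class PortalPageServlet extends HttpServlet {

    public void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        res.setContentType("text/html");

        // Volatile part: regenerated on every request.
        res.getWriter().println("<p>Hello, " + req.getRemoteUser() + "</p>");

        // Semi-static part: ask (hypothetically) that the included output
        // be considered valid for 60 seconds.
        req.setAttribute("cache.expiration", new Integer(60));
        RequestDispatcher rd =
            getServletContext().getRequestDispatcher("/headlines");
        rd.include(req, res);
    }
}

The point is only that the hint travels with the include; where it would
actually be honoured (container, connector or Apache) is exactly the open
architectural question.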

But the real gain would come if you could simply mark some cacheable areas
while sending the response. It's easy to imagine how powerful this could be
in XML-based tools like Cocoon, where the cacheable areas could be marked
with custom tags.
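Just to sketch what that could look like from servlet code (the
CacheableWriter API below is completely made up), bracketing a cacheable
area might be as simple as this; a custom tag in Cocoon would be the
XML-side equivalent:

import java.io.IOException;

// Purely hypothetical API sketch: neither CacheableWriter nor its methods
// exist anywhere today.  It only shows what 'bracketing' cacheable areas
// while sending the response might look like from servlet code.
public class CacheableWriterExample {

    // Minimal interface, just to make the sketch self-contained.
    public interface CacheableWriter {
        void println(String s) throws IOException;
        void beginCacheable(String referenceId, int ttlSeconds) throws IOException;
        void endCacheable() throws IOException;
    }

    public static void render(CacheableWriter out) throws IOException {
        out.println("<html><body>");

        // Slowly-changing area: the connector could replace this block with
        // a reference id once Apache holds a fresh copy of it.
        out.beginCacheable("front-page-headlines", 60);
        out.println("<ul><li>...headlines pulled from the database...</li></ul>");
        out.endCacheable();

        // Fully dynamic area, sent in full on every request.
        out.println("<p>Generated at " + new java.util.Date() + "</p>");
        out.println("</body></html>");
    }
}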

Anyhow, providing for it in the protocol would be a good starting point for
introducing some API hooks for it in the JSDK.

-- Mike
