httpd-dev mailing list archives

From "Victor J. Orlikowski" <v.j.orlikow...@gte.net>
Subject Re: Thoughts on filter-chain composition (Short)
Date Tue, 12 Sep 2000 16:49:47 GMT
Suppose for a moment that we're dealing once again with a handheld device,
and suppose that we have hacked up a filter to handle CC/PP (the W3C is
working on it; it's not a standard yet). The basic idea, for those not
familiar: use RDF in the request headers to specify the capabilities of a
given piece of hardware and client software, such as the Java VM level or
the ability to handle frames or tables.
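Once parsed, such headers might boil down to a capability profile like the
sketch below. This is purely illustrative; the field names are made up and
are not actual CC/PP vocabulary:

```python
# Hypothetical result of parsing CC/PP request headers into a
# capability profile. Field names are illustrative only.
profile = {
    "java_vm_level": 1.1,      # highest Java VM version supported
    "supports_frames": False,  # can the client render frames?
    "supports_tables": True,   # can the client render tables?
    "image_formats": ["gif"],  # image encodings the device can decode
}

# Filter setup could then branch on these capabilities.
if "jpeg" not in profile["image_formats"]:
    print("need a jpeg-to-gif transcoding filter")
```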
With the CC/PP headers, we'd be able to determine what kinds of encodings
the device could handle, and then perform the needed transcoding by setting
up the filter chain in advance. We would have the information on what kind
of content the device can handle, and we would have the info from the
content on the server (the headers of a GIF image, for example).
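That advance setup could look roughly like this sketch. The profile fields,
filter names, and chain representation are all hypothetical, not Apache's
actual filter API:

```python
# Hypothetical advance construction of a filter chain from the
# device profile and the stored content's own type/headers.
def build_filter_chain(profile, content_type):
    chain = []
    # Device can't decode JPEG but the stored content is JPEG:
    # schedule a transcoding filter up front.
    if content_type == "image/jpeg" and "jpeg" not in profile["image_formats"]:
        chain.append("jpeg_to_gif")
    # Device can't render tables: schedule a table-flattening filter.
    if content_type == "text/html" and not profile["supports_tables"]:
        chain.append("flatten_tables")
    return chain

profile = {"image_formats": ["gif"], "supports_tables": False}
print(build_filter_chain(profile, "image/jpeg"))  # expect ['jpeg_to_gif']
```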
But, as has been said, this advance setup can only be partially done, since
not all of the info is encoded in the header of the file to be filtered.
So the filters need to be written in such a way that, when a given filter
runs into a "feature" of the content that it cannot handle, it can insert
the proper filter into the chain to take care of things.
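A filter that splices in help on the fly might look like the following
sketch; again, the filter names and the chain-as-a-list representation are
illustrative assumptions, not the real API:

```python
# Hypothetical filter that, upon hitting a "feature" it cannot
# handle mid-stream, inserts another filter ahead of the rest of
# the chain.
def frames_filter(chunk, chain):
    if "<frameset" in chunk:
        # Frames are only discovered once the body is seen, so the
        # frame-removal filter gets spliced in on the fly.
        chain.insert(0, "remove_frames")
    return chunk

chain = ["compress"]
frames_filter("<frameset rows='*'>", chain)
print(chain)  # expect ['remove_frames', 'compress']
```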
Thus does Apache become a transcoding engine.
Unfortunately, the question that also rears its head is: "Do we cache the
results of transcoding this content?" The filters are nice, yes, but the
time spent transcoding should not be wasted; otherwise we perform the
*same* content manipulation for each of, say, 4000 requests for the same
content, since each request has to pass through the same filter chain.
This gets heavy, especially with a lot of graphics manipulation.
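One way to avoid the repeated work is to key a cache on both the content
and the device profile, so identical requests share one transcode. A
minimal sketch, with hypothetical names throughout:

```python
# Hypothetical transcoding cache: key the stored output on both the
# content identity and the device profile, so the same manipulation
# is not redone for each of 4000 identical requests.
transcode_cache = {}

def transcode(url, profile_key, run_filters):
    key = (url, profile_key)
    if key not in transcode_cache:
        transcode_cache[key] = run_filters(url)  # expensive work, done once
    return transcode_cache[key]

calls = []
def run_filters(url):
    calls.append(url)          # count how often we really transcode
    return "transcoded:" + url

for _ in range(4000):
    transcode("/logo.gif", "handheld-v1", run_filters)
print(len(calls))  # expect 1
```

The profile has to be part of the key: the same GIF transcoded for two
different device classes is two different cache entries.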

Just my thoughts,
Victor
-- 
Victor J. Orlikowski
======================
v.j.orlikowski@gte.net
vjo@raleigh.ibm.com
vjo@us.ibm.com
