httpd-dev mailing list archives

From Justin Erenkrantz <jus...@erenkrantz.com>
Subject Caching incomplete responses (was: Re: re-do of proxy request body handling - ready for review)
Date Thu, 03 Feb 2005 08:34:36 GMT
On Thu, Feb 03, 2005 at 09:06:04AM +0200, Graham Leggett wrote:
> Justin Erenkrantz wrote:
> 
> >I don't see any way to implement that cleanly and without lots of undue
> >complexity.  Many dragons lie in that direction.
> 
> When I put together the initial framework of mod_cache, solving this 
> problem was one of my goals.

While this may indeed be a worthy goal, the code that has been in mod_cache
to date cannot do this.

> >How do we know when another worker has already started to fetch a page?
> 
> Because there is an (incomplete) entry in the cache.

How?  Can we do this without tying the cache indexing to shared memory?
Remember that mod_disk_cache currently keeps no list of entries: a resource
is either cached or it isn't.  (The lookup is done via a file open() call.)
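
To make that concrete, the entire "index" is the filesystem itself.  Roughly
(a simplified sketch; data_file_path() is an illustrative stand-in, not the
real helper name):

  #include "httpd.h"
  #include "apr_file_io.h"

  /* Simplified sketch of a mod_disk_cache-style lookup: the cache key
   * is hashed into a file path, and "is it cached?" is answered by
   * whether the open succeeds.  No index, no lock, no shared memory.
   * (data_file_path() is a hypothetical name for the path helper.)
   */
  static apr_status_t open_cached_entity(request_rec *r, const char *key,
                                         apr_file_t **fd)
  {
      const char *path = data_file_path(r->pool, key); /* key -> hashed path */
      return apr_file_open(fd, path, APR_READ | APR_BINARY,
                           APR_OS_DEFAULT, r->pool);
      /* APR_SUCCESS means we have a cached copy; anything else means no. */
  }

There is simply no place in that model to record "someone else is fetching
this right now."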

If we add shared memory, I fail to see how the benefit in this corner case
outweighs the cost incurred in the common case.  We would have to introduce
shared memory and locking throughout mod_cache and mod_disk_cache just to
handle this one case, and that would unnecessarily slow down the overall
operation of the cache for everything else.  A fair portion of the speed of
mod_cache and mod_disk_cache comes from the fact that no locks or shared
memory are involved (which is partly why mod_mem_cache is often worse than
no caching at all!).
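
To see the cost, compare what every single lookup would have to grow (sketch
only; cache_mutex, shm_index and cache_index_lookup() are hypothetical names,
not existing code):

  /* With a shared index, the *common* path - every lookup, hit or
   * miss, on every request - now pays for a cross-process lock.
   */
  apr_global_mutex_lock(cache_mutex);         /* serializes all children */
  entry = cache_index_lookup(shm_index, key);
  apr_global_mutex_unlock(cache_mutex);

That lock is contended by every child on every request, whether or not
anyone ever hits the race we are trying to solve.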

And that doesn't even begin to touch on the general problems with shared
memory for this particular use.  For example, the shared-memory cache index
would be lost on a power failure or system crash.  One way to address that
would be to page the central index to disk - but I believe that is *way* too
complex for httpd.

> >How do we even know if the response is even cacheable at all?
> 
> RFC2616.

Yes, I know the RFC.  But, as I said in my earlier reply, the RFC does not
help when we don't have the response yet!

This comes back to the following race condition:

- We have no valid, cached representation of resource Z on origin server D.
- Client A makes a request for resource Z on origin server D.
- Client B makes a request for resource Z on origin server D.
- Representation of Z is served at some later time by origin server D.

What should Client B do?  There is no response to Client A's request yet.
Should Client B block until we know whether the response to Client A's
request is cacheable?  Then, if it turns out not to be cacheable, we have to
request the representation of Z all over again after finding that out.  Or
should Client B immediately make a duplicate request, under the pessimistic
assumption that the response will be non-cacheable?

At what point in the process should Client B block on Client A?  Should
Client B block only once some portion of the body (not just the headers)
has been received?

The issue I have is that optimistically assuming we can cache a response
without seeing that response in its entirety is dangerous.  I think the safe
(and prudent) behavior is to assume that any incomplete response isn't
cacheable: we should immediately issue an additional request for resource Z.

> >How do we know when the content is completed?
> 
> Because of a flag in the cache entry telling us.

Without the introduction of shared memory, I don't believe this is a realistic
strategy.

> >For example, if the response is chunked, there is no way to know what 
> >the final length is ahead of time.
> 
> We have no need to know. The "in progress" cache flag is only going to 
> be marked as "complete" when the request is complete. If that request 
> was chunked, streamed, whatever makes no difference.

Actually, yes, mod_disk_cache needs to know the full length ahead of time.
mod_disk_cache never actually 'reads' the file: it relies on sendfile()
doing zero-copy from the hard drive to the network card.  Successful
zero-copy requires that the complete length of the file be known ahead of
time, or it can't be used.
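
The bucket API makes this explicit.  This fragment is roughly how a
disk-cache hit gets handed to the core for sendfile() (simplified; fd and
finfo come from opening and stat'ing the cached file, error handling
omitted):

  /* A file bucket must be created with its final length.  For an
   * in-progress cache entry there is no honest value to pass here.
   */
  apr_bucket_brigade *bb = apr_brigade_create(r->pool, c->bucket_alloc);
  apr_bucket *e = apr_bucket_file_create(fd, 0, (apr_size_t)finfo.size,
                                         r->pool, c->bucket_alloc);
  APR_BRIGADE_INSERT_TAIL(bb, e);
  /* finfo.size must be the *complete* size of the cached body. */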

Furthermore, the APR file buckets do not know how to handle a file that
reaches EOF more than once.  There is no clean way to say, "Hey, read to the
end of the file.  Then pick up where you left off and read to EOF again."
When do you stop?  How do you know when to stop?  Can you ever know?  (A
shared-memory count that says how much data is available?  Oh joy.)
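
Concretely, any reader trying to follow a still-growing cache file ends up
in a loop like this (a sketch of the problem, not a solution):

  #include "apr_file_io.h"

  /* The tail-follow problem: once apr_file_read() returns APR_EOF, the
   * reader cannot distinguish "the writer is finished" from "the writer
   * is merely slow" without an external signal - which is exactly the
   * shared-memory flag/count we were trying to avoid.
   */
  static apr_status_t drain_growing_file(apr_file_t *fd)
  {
      char buf[8192];
      apr_size_t len;
      apr_status_t rv;

      for (;;) {
          len = sizeof(buf);
          rv = apr_file_read(fd, buf, &len);
          if (rv == APR_EOF) {
              /* Done?  Or merely ahead of the writer?  Unknowable here. */
              return APR_SUCCESS;
          }
          if (rv != APR_SUCCESS) {
              return rv;
          }
          /* ... pass len bytes of buf downstream ... */
      }
  }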

What happens if the origin server drops the connection for Client A's
request?  What should we do with Client B then?  How does Client B's httpd
instance even know that the connection was dropped, rather than Origin D
just being stalled temporarily?  Do we then abort Client B's connection?
Should we issue a new request to Origin D on behalf of Client B?  (Perhaps
using a byterange request?)

How do we handle a crash in the httpd instance responding to Client A while it
is storing an incomplete response?  Can we detect this?  (A crash in any
program attached to a shmem segment may corrupt the entire segment.)

What if we can't serve a chunked response back to Client B?  (It could be an
HTTP/1.0 client.)

> As the cache was designed to cache multiple variants of the same URL, 
> Vary should not be a problem. If we are still waiting for the initial 
> response, then we have no cache object yet - the race condition is still 
> there, but a few orders of magnitude shorter in duration.

No, mod_cache is not designed to handle multiple cached variants of the same
resource.  The only thing it does is discard/ignore the cached entry when it
sees that the Vary conditions are not met.
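
What the code actually does amounts to this check (heavily simplified from
the real logic):

  #include <string.h>
  #include "httpd.h"

  /* Simplified sketch of mod_cache's Vary handling: for each header
   * named in the stored Vary value, the current request's header must
   * match the one saved with the entry.  If not, the cached object is
   * skipped - there is no second variant to fall back to.
   */
  static int vary_header_matches(request_rec *r,
                                 apr_table_t *stored_req_hdrs,
                                 const char *name)
  {
      const char *ours   = apr_table_get(r->headers_in, name);
      const char *theirs = apr_table_get(stored_req_hdrs, name);

      if (ours == NULL && theirs == NULL) {
          return 1;
      }
      return ours && theirs && strcmp(ours, theirs) == 0;
  }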

> >Additionally, with this strategy, if the first client to request a page 
> >is on a slow link, then other clients who are on faster links will be 
> >stalled while the cached content is stored and then served.
> 
> If this is happening now then it's a design flaw in mod_cache.

No, it's not specific to mod_cache.  It's part of the fundamental synchronous
network design of httpd.

> Cache should fill as fast as the sender will go, and the client should 
> be able to read as slow as it likes.

Nope.  It can't work like that.  You only have one process.

> This is important to ensure backend servers are not left hanging around 
> waiting for slow frontend clients.

This is why Paul and others mentioned the Event MPM, and why Ron suggested a
separate thread.  However, that isn't a portable solution, as threads aren't
available everywhere (*ahem* FreeBSD prior to 5.3 *ahem*).

There is currently no mechanism to decouple the speed at which we read the
backend server's response from the speed at which we write that response to
the frontend client.  If writing the response to the client blocks, it
blocks us from reading the response from the backend.  httpd is completely
synchronous in this regard and always has been.  Furthermore, any such
decoupling would run into problems with the filter design, as both
CACHE_SAVE and CACHE_OUT operate as ordinary output filters.
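
The shape of the data path makes this clear.  Stripped to its essentials,
the proxy/cache loop is a fragment like this (heavily simplified; rv, bb,
backend and r come from the surrounding handler):

  /* One loop, one process: backend read and client write in lockstep.
   * If ap_pass_brigade() blocks on a slow client, we are not reading
   * from the backend either.  CACHE_SAVE runs inside the pass, as an
   * ordinary output filter.
   */
  int seen_eos = 0;
  while (!seen_eos) {
      rv = ap_get_brigade(backend->input_filters, bb, AP_MODE_READBYTES,
                          APR_BLOCK_READ, AP_IOBUFSIZE);
      if (rv != APR_SUCCESS) {
          break;
      }
      if (!APR_BRIGADE_EMPTY(bb)
          && APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(bb))) {
          seen_eos = 1;
      }
      rv = ap_pass_brigade(r->output_filters, bb); /* may block on client */
      apr_brigade_cleanup(bb);
  }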

> >The downside of stalling in the hope that we'll be able to actually 
> >serve from our cache because another process has made the same request 
> >seems much worse to me than our current approach.  We could end up 
> >making the client wait an indefinite amount of time for little advantage.
> 
> There have been bugs outstanding in mod_proxy v1.3 complaining about 
> this issue - the advantage to fixing this is real.

I believe pre-caching/priming strategies are easier solutions that produce the
same result at a fraction of the cost and effort.

> >The downside of the current approach is that we introduce no performance 
> >penalty to the users at the expense of additional bandwidth towards the 
> >origin server: we essentially act as if there was no cache present at all.
> 
> But we introduce a performance penalty to the backend server, which must 
>  now handle load spikes from clients. This problem can (and has been 
> reported in the past to) have a significant impact on big sites.

I really doubt those big sites even notice.  If a site is serving ISO
images, one or two extra requests here or there don't really matter.  The
overall net effect of caching still brings such a big site more benefit than
having no cache at all.

> >I would rather focus on getting mod_cache reliable than rewriting it all 
> >over again to minimize a relatively rare issue.  If it's that much of a 
> >problem, many pre-caching/priming strategies are also available.  -- justin
> 
> Nobody is expecting a rewrite of the cache, and this issue is definitely 
> not rare. I'll start looking at this when I finished getting the LDAP 
> stuff done.

I think solving this correctly would indeed entail a complete rewrite of the
caching engine.  Anything short of that would likely be a half-hearted
solution that introduces more bugs than it fixes.  -- justin
