httpd-dev mailing list archives

From Brian Behlendorf <br...@hyperreal.com>
Subject Re: Problems with Apache 1.2b1 and HTTP/1.1 pipelining (fwd)
Date Fri, 20 Dec 1996 20:16:07 GMT

---------- Forwarded message ----------
Date: Thu, 19 Dec 1996 15:45:24 -0500
From: Henrik Frystyk Nielsen <frystyk@w3.org>
To: ben@algroup.co.uk, new-httpd@hyperreal.com
Subject: Re: Problems with Apache 1.2b1 and HTTP/1.1 pipelining (fwd)

At 06:56 PM 12/19/96 +0000, Ben Laurie wrote:

Hi Ben,

Oh man - my mail filter was a bit too loose. It picked up the "httpd" and
generated an automated reply for the CERN server. I am sorry.

>Alexei Kosut wrote:
>> 
>> Can someone who knows much about this stuff take a look at this (esp
>> the second URL)? It sounds like something we should look into fixing.
>
>I've taken a look. I'm somewhat mystified that bytes are being lost - you'd
>have thought we'd have seen this in other circumstance, but perhaps not. It
>might help to know which bytes are lost.

The situation that we can see is that we sometimes lose the start of a
response, so that we get something like

	" OK CRLF

instead of

	"200 OK CRLF

This appears to happen at the beginning of a TCP packet. It's not obvious
that this would have shown up before. As far as I know, libwww is the
first HTTP/1.1 implementation that can do pipelined requests, so
Apache has never before been in the situation of having 40+ requests
queued for reading almost immediately.

I don't know if it is clear from the plots that I have provided, but it is
clearly a timing problem. It almost always works the first time, when I get 40
HEAD responses back in individual TCP segments. However, the next time it
very often goes wrong. The difference between the first and the second run
is probably that the stat calls are faster due to the file cache.

>It seems to me that not flushing the response would be problematic - a client
>that was not pipelining would presumably wait indefinitely for the end of the
>response before sending the next request. I suppose it might be possible to
>move the flush into the wait for the next request, and only flush if it
>wasn't already queued.

What you can do - and in fact Jigsaw does this - is to look in the input
buffer before sending a response. If there is a request already pending,
then you can safely delay sending the response. Note that this is a
function of the size of the response - for a HEAD response it is much more
likely to have an impact than when sending the body as well.

Our preliminary results, described in our performance paper, show that
Jigsaw uses somewhat fewer packets than Apache. In the long run this may
improve overall performance, as the network is better at handling fewer but
larger packets.

Thanks for your time!

Henrik

--
Henrik Frystyk Nielsen, <frystyk@w3.org>
World Wide Web Consortium, MIT/LCS NE43-356
545 Technology Square, Cambridge MA 02139, USA



