httpd-dev mailing list archives

From Dean Gaudet <dgau...@arctic.org>
Subject Re: a concern
Date Tue, 28 Jan 1997 09:49:58 GMT
Oh this sucks.

Here's my take.  Apache is unlikely to be unseated as the most popular
server out there in the near future.  So whatever we release, w.r.t. 
HTTP/1.1, is going to be "standard" enough that browser vendors will be
forced to support it.  I'm not advocating that we do something bad, I'm
just saying that if we do it right then they'll be forced to do it right.
But of course, if they have to support old CERN servers, then they may be in
a position where they just have no choice, and then we'll have to be helpful.

So my solution is to add another BUFF * flag that says "if a read would
block then flush the write buffer before blocking".  We can set that flag
during read_request_line() (and I think unset it everywhere else). 

I can do this too while I'm mucking with buff.c. 

Dean

On Mon, 27 Jan 1997, Alexei Kosut wrote:

> With Dean's new chunking code, Apache won't flush its output if there
> is text waiting in the input buffer. This is better behavior,
> especially if the client is pipelining requests.
> 
> However, I had a thought. I remember some discussion on http-wg a year
> or so ago about how some clients, when they POST something, tack on an
> extra CRLF at the end, without accounting for it in the
> Content-length. Now, Apache handles this gracefully, by allowing a
> request to be prepended by any number of empty lines.
> 
> However, let's imagine a client that tacked on this extra CRLF, and
> also opened a persistent connection. With the current rev of the
> Apache code, the server won't give the client its last block of data,
> because it will see the CRLF in the input buffer, and think that the
> client is sending a second request. Assuming the browser never does make
> another request, it won't get its data until the keepalive timeout
> times out.
> 
> Specifically, at least Netscape Navigator 3.01 for the Macintosh
> displays this particular bug.
> 
> We could do several things:
> 
> a) only activate the "better" bflush behavior if the previous request
> was HTTP/1.1. Presumably all HTTP/1.1 clients are better
> behaved. However, I wouldn't bet the farm on it. Given that it
> wouldn't take much to make most modern HTTP clients (Netscape
> included) minimally HTTP/1.1 compliant (throw in chunking support,
> recognition of a few more headers, and you're done), it's possible
> that at some point this may be done without fixing this bug.
> 
> b) only do it for GET requests. These can't have an entity body,
> and therefore won't have this problem. This doesn't feel like the
> right solution, though.
> 
> c) Add a flush into read_request_line()'s while() loop that checks for
> empty lines - this way, if the empty lines are sent, the server will
> flush its output. This means that these clients will have to take
> a performance hit when pipelining POST requests (come to think of
> it... why *would* you want to pipeline a POST request?), but I'm
> comfortable with that.
> 
> d) Ignore the problem and blame the broken clients. This is a Bad
> Idea, IMHO, since 80+% of the clients out there probably have this
> problem.
> 
> P.S. I could have sworn one of the Netscape people (Lou Montulli, I
> think) said he was going to fix this (probably by adding 2 to the
> Content-length, as taking out the CRLF breaks CGI scripts when running
> the CERN server, and maybe others), but I guess he never did.
> 
> --
> ________________________________________________________________________
> Alexei Kosut <akosut@nueva.pvt.k12.ca.us>      The Apache HTTP Server
> URL: http://www.nueva.pvt.k12.ca.us/~akosut/   http://www.apache.org/
> 
> 

