httpd-dev mailing list archives

From Marc Slemko <ma...@znep.com>
Subject Re: more lingering_close...
Date Sun, 09 Feb 1997 21:00:36 GMT
On Sun, 9 Feb 1997, Ben Laurie wrote:

> Marc Slemko wrote:
> > One might expect that the server would not generate RSTs to packets until
> > the client had closed its half of the connection; at the time the
> > server sends the RSTs, the connection is only half-closed.  However,
> > that isn't the way it happens.
> 
> Hang on. I understood up to this point (finally), but surely this is wrong?
> If the server isn't doing an l_c(), then it won't have half-closed at this
> point, it will have full-closed, and hence the RST's are exactly what is
> expected. In this case, we should definitely half-close (which is surely the
> point of l_c()?), or wait for the client to complete their send (which may be
> lame - but since the spec doesn't give us a slot at this point in the protocol
> to say "OK" or "Oops", we shouldn't really be talking or closing connections
> yet).

This is a lack of clarity in the terminology used.  If you look at the
state of the server based on the packets received and sent by the server,
then during the period between when it got the ACK in response to its FIN
and when the client sends its FIN, the connection is in a half-closed
state.  This state was not reached by the server doing a half close
(i.e. shutdown(sd, 1)); rather, it is an intermediate state in the full
closing of a TCP connection.

A full close requires four packets before it is completed, unless you make
up your own standards like MS and send a RST, in which case a lame close
requires three.  The full close is not complete until all four of those
packets have gone through (well, the server can close after three, i.e.
before it sends the final ACK).

When the server sends the FIN, it goes into FIN_WAIT_1.  When it gets an
ACK it goes into FIN_WAIT_2.  Based purely on the state transitions, at
this point a half close has been done regardless of what was passed to the
API.  Then when it gets a FIN from the client it goes into TIME_WAIT and
the connection closes.
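The active-close transitions described above can be written down as a
small table (my own sketch, not from the original message; it covers only
the simple active-close path of the TCP state machine and omits the
CLOSING / simultaneous-close states):

```python
# Toy model of the active closer's side of the TCP close handshake.
# States and events follow the standard TCP state names; the CLOSING
# state (simultaneous close) is deliberately omitted.
ACTIVE_CLOSE = {
    ("ESTABLISHED", "send FIN"): "FIN_WAIT_1",
    ("FIN_WAIT_1",  "recv ACK"): "FIN_WAIT_2",   # half-closed from here on
    ("FIN_WAIT_2",  "recv FIN"): "TIME_WAIT",    # send final ACK, then wait
}

def step(state, event):
    return ACTIVE_CLOSE[(state, event)]

state = "ESTABLISHED"
for event in ("send FIN", "recv ACK", "recv FIN"):
    state = step(state, event)
# the connection ends up in TIME_WAIT
```

As the table makes visible, nothing in the transitions records whether the
application called shutdown() or close(); that distinction lives only in
the API, which is the point being made above.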

If the server has done only a half close, then receiving data while in
the FIN_WAIT_2 state isn't an error; lingering_close relies on this to
avoid the problem.
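The mechanism can be sketched roughly as follows (a hypothetical sketch in
Python, not Apache's actual C implementation; the function name, buffer
size, and timeout are all illustrative):

```python
import socket

def lingering_close(sock, timeout=2.0):
    # Half-close only the sending side: this sends our FIN but leaves the
    # read side open, so late data from the client is consumed rather
    # than provoking a RST from the kernel.
    sock.shutdown(socket.SHUT_WR)
    sock.settimeout(timeout)
    try:
        # Drain anything the client still sends until we see its FIN
        # (recv() returning b"") or the timeout expires.
        while sock.recv(4096):
            pass
    except OSError:
        pass
    sock.close()
```

The timeout matters: without it, a client that never finishes sending
would hold the connection (and the server process) open indefinitely.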

If the server has instead done a full close, then it is unclear whether
getting data while in the FIN_WAIT_2 state should be an error; that is
what I am referring to above.  Based purely on the state transitions you
can't tell the difference between the two cases, but when you look at the
API calls you can.
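In practice, what most stacks do in the full-close case is the RST
behaviour described in the quoted text above.  Here is a minimal loopback
demonstration (my own illustration of observed Linux/BSD behaviour, not
part of the original message): the server fully closes while unread client
data is pending, the kernel answers with a RST, and the client's next read
fails.

```python
import socket
import time

# Server side: accept a connection and then close it without ever
# reading the request data the client sent.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
conn, _ = srv.accept()

cli.sendall(b"request body the server never reads")
time.sleep(0.2)          # let the data reach the server's receive buffer
conn.close()             # unread data pending -> kernel sends RST,
                         # not the orderly FIN/ACK/FIN/ACK sequence

time.sleep(0.2)
got_reset = False
try:
    cli.recv(4096)       # the RST surfaces as a connection-reset error
except ConnectionResetError:
    got_reset = True
cli.close()
srv.close()
```

This is exactly the case the client can do nothing about: once the RST
arrives, any buffered but unread server data may be discarded.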

Waiting for the client to complete its send is one option for fixing this
particular problem (and is, in fact, necessary to support persistent
connections for PUTs and POSTs), but it does not work when you consider
persistent connections in general.

> 
> It seems to me that this is really a deficiency in the HTTP spec - in that the
> spec leaves us in a situation where it isn't clear whose turn it is to speak.
> But I don't suppose it is fixable before HTTP/2.0...

I don't see that it is fixable at all in the HTTP spec without losing
functionality.  The whole point of pipelining is that there is no need to
know whose turn it is to speak.  If you limit the protocol so that only
one end can be sending at a time, you reduce performance a lot.  It is the
same thing NNTP encounters without streaming: it is a lock-step protocol
and so has an intrinsic per-transaction performance limit on small
transactions, based on the RTT and independent of the bandwidth.

> > *** THIS IS IMPORTANT: *** When the client gets the RST, the RST
> > includes the sequence number of the last packet from the server
> > ACKed by the client at the time the RST was sent.  The client WILL
> > normally flush any buffered incoming data received from the server
> > after that sequence number.  This means that if the client is to
> > reliably get the entire error message to display, the server MUST
> > NOT send a RST until it has received an ACK of the last packet in
> > the error message it sends to the client.  Nothing the client
> > can do without modifying the TCP stack can change this, no matter what
> > it does with errors.
> > 
> > I think this illustrates the issue at the TCP layer and how you have
> > problems when the server closes the connection but the client keeps
> > sending.  It is only an example, however, since the above can
> > be solved by simply not sending a response until we get all the
> > data.  A bit wasteful and lame, but a possible workaround.  I think 
> > some of Netscape's newer servers do this; older ones appear to act
> > just like Apache 1.1.x.  
> 
> As I say above, the problem is not really in the TCP layer, it is our fault
> for closing the connection early.

That could be argued in this case.  It would be worth worrying about if it
weren't for the persistent connection case that makes lingering_close
necessary anyway.

