httpd-cvs mailing list archives

From Roy Fielding <field...@hyperreal.com>
Subject cvs commit: apache/htdocs/manual/misc fin_wait_2.html
Date Thu, 30 Jan 1997 00:48:33 GMT
fielding    97/01/29 16:48:33

  Modified:    htdocs/manual/misc  fin_wait_2.html
  Log:
  Adjusted some of the explanations of the FIN_WAIT_2 problems to
  accurately reflect the current status, reasons why it occurs, and
  what client authors should be doing.  Also reformatted my mail
  message appendix so that it is applicable to a non-Apache audience.
  
  Revision  Changes    Path
  1.3       +88 -72    apache/htdocs/manual/misc/fin_wait_2.html
  
  Index: fin_wait_2.html
  ===================================================================
  RCS file: /export/home/cvs/apache/htdocs/manual/misc/fin_wait_2.html,v
  retrieving revision 1.2
  retrieving revision 1.3
  diff -C3 -r1.2 -r1.3
  *** fin_wait_2.html	1997/01/28 08:44:45	1.2
  --- fin_wait_2.html	1997/01/30 00:48:31	1.3
  ***************
  *** 31,37 ****
    the system is rebooted.  If the system does not have a timeout and
    too many FIN_WAIT_2 connections build up, it can fill up the space
    allocated for storing information about the connections and crash
  ! the kernel.  The connections in FIN_WAIT_2 do not tie up a httpd
    process.<P>
    
    <H2><LI>But why does it happen?</H2>
  --- 31,37 ----
    the system is rebooted.  If the system does not have a timeout and
    too many FIN_WAIT_2 connections build up, it can fill up the space
    allocated for storing information about the connections and crash
  ! the kernel.  The connections in FIN_WAIT_2 do not tie up an httpd
    process.<P>
    
    <H2><LI>But why does it happen?</H2>
  ***************
  *** 50,67 ****
    connection stays in the FIN_WAIT_2 state until one of the following
    happens:<P>
    <UL>
  ! 	<LI>The buggy client  opens a new connection to the same or a different
  ! 	    site, which causes it to fully close the connection.
  ! 	<LI>The user exits the client which, on some (most?) clients
    	    causes the OS to fully shutdown the connection.
    	<LI>The FIN_WAIT_2 times out, on servers that have a timeout
    	    for this state.
    </UL><P>
    If you are lucky, this means that the buggy client will fully close the
    connection and release the resources on your server.  However, there
  ! are many cases where things, such as a dialup client disconnecting from
  ! their provider before closing the client, cause it to remain open.
  ! <STRONG>This is a bug in the browser.</STRONG>  <P>
    
    The clients on which this problem has been verified to exist:<P>
    <UL>
  --- 50,72 ----
    connection stays in the FIN_WAIT_2 state until one of the following
    happens:<P>
    <UL>
  ! 	<LI>The client opens a new connection to the same or a different
  ! 	    site, which causes it to fully close the older connection on
  !             that socket.
  ! 	<LI>The user exits the client, which on some (most?) clients
    	    causes the OS to fully shutdown the connection.
    	<LI>The FIN_WAIT_2 times out, on servers that have a timeout
    	    for this state.
    </UL><P>
    If you are lucky, this means that the buggy client will fully close the
    connection and release the resources on your server.  However, there
  ! are some cases where the socket is never fully closed, such as a dialup
  ! client disconnecting from their provider before closing the client.
  ! In addition, a client might sit idle for days without making another
  ! connection, and thus may hold its end of the socket open for days
  ! even though it has no further use for it.
  ! <STRONG>This is a bug in the browser or in its operating system's
  ! TCP implementation.</STRONG>  <P>
    
    The clients on which this problem has been verified to exist:<P>
    <UL>
  ***************
  *** 69,118 ****
    	<LI>Mozilla/2.02 (X11; I; FreeBSD 2.1.5-RELEASE i386)
    	<LI>Mozilla/3.01Gold (X11; I; SunOS 5.5 sun4m)
    	<LI>MSIE 3.01 on the Macintosh
    </UL><P>
    
  ! It is expected that many other clients have the same problem.<P>
    
  ! Apache can <STRONG>NOT</STRONG> do anything to avoid this other
  ! than disabling persistent connections for all buggy clients, just
  ! like we recommend doing for Navigator 2.x clients due to other bugs
  ! in Navigator 2.x.  As far as we know, this happens with all servers
  ! that support persistent connections including Apache 1.1.x and
  ! 1.2.<P>
  ! 
  ! <H3>Something is broken</H3>
    
    While the above bug is a problem, it is not the whole problem.
  ! There is some other problem involved; some people do not have any
  ! serious problems on 1.1.x, but with 1.2 enough connections build
  ! up in the FIN_WAIT_2 state to crash their server.  This is due to
  ! a function called <CODE>lingering_close()</CODE> which was added
    between 1.1 and 1.2.  This function is necessary for the proper
  ! handling of PUTs and POSTs to the server as well as persistent
  ! connections.  What it does is read any data sent by the client for
    a certain time after the server closes the connection.  The exact
  ! reasons for doing this are somewhat complicated but involve what
    happens if the client is making a request at the same time the
  ! server closes the connection; without it, the client would get an
  ! error.  With it the client just gets the closed connection and
  ! knows to retry.  See the <A HREF="#appendix">appendix</A> for more
  ! details.<P>
    
    We have not yet tracked down the exact reason why
    <CODE>lingering_close()</CODE> causes problems.  Its code has been
  ! thoroughly reviewed.  It is possible there is some problem in the BSD
  ! TCP stack which is causing this.  Unfortunately, we are not able to
  ! easily replicate the problem on test servers so it is difficult to
  ! debug.  We are still working on the problem.  <P>
    
    <H2><LI>What can I do about it?</H2>
    
    There are several possible workarounds to the problem, some of
    which work better than others.<P>
    <H3>Add a timeout for FIN_WAIT_2</H3>
  ! The obvious workaround is to simply have a timeout for the FIN_WAIT_2
  ! state.  This is not specified by the RFC and could be claimed to be a
  ! violation of the RFC, however it is becoming necessary in many cases.
    The following systems are known to have a timeout:
    <P>
    <UL>
  --- 74,144 ----
    	<LI>Mozilla/2.02 (X11; I; FreeBSD 2.1.5-RELEASE i386)
    	<LI>Mozilla/3.01Gold (X11; I; SunOS 5.5 sun4m)
    	<LI>MSIE 3.01 on the Macintosh
  + 	<LI>MSIE 3.01 on Win95
    </UL><P>
    
  ! It is expected that many other clients have the same problem. What a
  ! client <STRONG>should do</STRONG> is periodically check its open
  ! socket(s) to see if they have been closed by the server, and close its
  ! side of the connection if the server has closed.  This check need only
  ! occur once every few seconds, and may even be detected by an OS signal
  ! on some systems (e.g., Win95 and NT clients have this capability, but
  ! they seem to be ignoring it).<P>
  ! 
  ! Apache <STRONG>cannot</STRONG> avoid these FIN_WAIT_2 states unless it
  ! disables persistent connections for the buggy clients, just
  ! like we recommend doing for Navigator 2.x clients due to other bugs.
  ! However, non-persistent connections increase the total number of
  ! connections needed per client and slow retrieval of an image-laden
  ! web page.  Since non-persistent connections have their own resource
  ! consumption and a short waiting period after each closure, a busy server
  ! may need persistence in order to best serve its clients.<P>
  ! 
  ! As far as we know, the client-caused FIN_WAIT_2 problem is present for
  ! all servers that support persistent connections, including Apache 1.1.x
  ! and 1.2.<P>
    
  ! <H3>Something in Apache may be broken</H3>
    
    While the above bug is a problem, it is not the whole problem.
  ! Some users have observed no FIN_WAIT_2 problems with Apache 1.1.x,
  ! but with 1.2b enough connections build up in the FIN_WAIT_2 state to
  ! crash their server.  We have not yet identified why this would occur
  ! and welcome additional test input.<P>
  ! 
  ! One possible (and most likely) source for additional FIN_WAIT_2 states
  ! is a function called <CODE>lingering_close()</CODE> which was added
    between 1.1 and 1.2.  This function is necessary for the proper
  ! handling of persistent connections and any request which includes
  ! content in the message body (e.g., PUTs and POSTs).
  ! What it does is read any data sent by the client for
    a certain time after the server closes the connection.  The exact
  ! reasons for doing this are somewhat complicated, but involve what
    happens if the client is making a request at the same time the
  ! server sends a response and closes the connection. Without lingering,
  ! the client might be forced to reset its TCP input buffer before it
  ! has a chance to read the server's response, and thus understand why
  ! the connection has closed.
  ! See the <A HREF="#appendix">appendix</A> for more details.<P>
    
    We have not yet tracked down the exact reason why
    <CODE>lingering_close()</CODE> causes problems.  Its code has been
  ! thoroughly reviewed and extensively updated in 1.2b6.  It is possible
  ! that there is some problem in the BSD TCP stack which is causing the
  ! observed problems.  It is also possible that we fixed it in 1.2b6.
  ! Unfortunately, we have not been able to replicate the problem on our
  ! test servers.<P>
    
    <H2><LI>What can I do about it?</H2>
    
    There are several possible workarounds to the problem, some of
    which work better than others.<P>
  + 
    <H3>Add a timeout for FIN_WAIT_2</H3>
  ! 
  ! The obvious workaround is to simply have a timeout for the FIN_WAIT_2 state.
  ! This is not specified by the RFC, and could be claimed to be a
  ! violation of the RFC, but it is widely recognized as being necessary.
    The following systems are known to have a timeout:
    <P>
    <UL>
  ***************
  *** 172,178 ****
    section of code being similar to that which was in 1.1.  If you do
    this, be aware that it can cause problems with PUTs, POSTs and
    persistent connections, especially if the client uses pipelining.  
  ! That said, it is no worse than on 1.1 and I assume that keeping your 
    server running is quite important.<P>
    
    To compile without the <CODE>lingering_close()</CODE> function, add
  --- 198,204 ----
    section of code being similar to that which was in 1.1.  If you do
    this, be aware that it can cause problems with PUTs, POSTs and
    persistent connections, especially if the client uses pipelining.  
  ! That said, it is no worse than on 1.1, and we understand that keeping your 
    server running is quite important.<P>
    
    To compile without the <CODE>lingering_close()</CODE> function, add
  ***************
  *** 190,196 ****
    <CODE>lingering_close</CODE>.  On some systems, it could possibly work
    better so it may be worth a try if you have no other alternatives. <P>
    
  ! To try it, add <CODE>-DUSE_SO_LINGER</CODE>  to the end of the
    <CODE>EXTRA_CFLAGS</CODE> line in your <CODE>Configuration</CODE>
    file, rerun <CODE>Configure</CODE> and rebuild the server.  <P>
    
  --- 216,222 ----
    <CODE>lingering_close</CODE>.  On some systems, it could possibly work
    better so it may be worth a try if you have no other alternatives. <P>
    
  ! To try it, add <CODE>-DUSE_SO_LINGER -DNO_LINGCLOSE</CODE> to the end of the
    <CODE>EXTRA_CFLAGS</CODE> line in your <CODE>Configuration</CODE>
    file, rerun <CODE>Configure</CODE> and rebuild the server.  <P>
    
  ***************
  *** 200,207 ****
    
    <H3>Increase the amount of memory used for storing connection state</H3>
    <DL>
  ! <DT>BSD based networking code: <DD>BSD stores network data such as connection
  ! states in something called a mbuf.  When you get so many connections
    that the kernel does not have enough mbufs to put them all in, your
    kernel will likely crash.  You can reduce the effects of the problem
    by increasing the number of mbufs that are available; this will not
  --- 226,234 ----
    
    <H3>Increase the amount of memory used for storing connection state</H3>
    <DL>
  ! <DT>BSD based networking code:
  ! <DD>BSD stores network data, such as connection states,
  ! in something called an mbuf.  When you get so many connections
    that the kernel does not have enough mbufs to put them all in, your
    kernel will likely crash.  You can reduce the effects of the problem
    by increasing the number of mbufs that are available; this will not
  ***************
  *** 222,260 ****
    
    <H2><A NAME="appendix"><LI>Appendix</H2>
    <P>
  ! Below is a message from Roy Fielding that details some of the
  ! reasons why some type of function that has the functionality of
  ! <CODE>lingering_close()</CODE> is necessary.
  ! 
  ! <PRE>
  ! Date: Tue, 21 Jan 1997 01:15:38 -0800
  ! From: "Roy T. Fielding" &lt;fielding@liege.ICS.UCI.EDU&gt;
  ! Subject: Re: lingering_close() 
  ! 
  ! Sorry, I thought everyone was up to speed on this problem (and I just
  ! managed to catch up on my apache mail, finally).  This is noted a couple
  ! times in the HTTP specs, but most of the discussion was between myself,
  ! Henrik, rst, and Dave Raggett in the hallways of MIT (which is why it
  ! doesn't appear in our archives).
    
    If a server closes the input side of the connection while the client
    is sending data (or is planning to send data), then the server's TCP
  ! stack will signal an RST (reset, not Robert) back to the client.  Upon
    receipt of the RST, the client will flush its own incoming TCP buffer
    back to the un-ACKed packet indicated by the RST packet argument.
    If the server has sent a message, usually an error response, to the
    client just before the close, and the client receives the RST packet
    before its application code has read the error message from its incoming
  ! TCP buffer, then the RST will flush the error message before the client
  ! application has a chance to see it, and thus the client is left thinking
  ! that the connection failed for no apparent reason.
    
    There are two conditions under which this is likely to occur:
  !   1) sending POST or PUT data without proper authorization
  !   2) sending multiple requests before each response (pipelining) 
  !      and one of the middle requests resulting in an error or
  !      other break-the-connection result.
  ! 
    The solution in all cases is to send the response, close only the
    write half of the connection (what shutdown is supposed to do), and
    continue reading on the socket until it is either closed by the
  --- 249,285 ----
    
    <H2><A NAME="appendix"><LI>Appendix</H2>
    <P>
  ! Below is a message from Roy Fielding, one of the authors of HTTP/1.1.
  ! 
  ! <H3>Why the lingering close functionality is necessary with HTTP</H3>
  ! 
  ! The need for a server to linger on a socket after a close is noted a couple
  ! times in the HTTP specs, but not explained.  This explanation is based on
  ! discussions between myself, Henrik Frystyk, Robert S. Thau, Dave Raggett,
  ! and John C. Mallery in the hallways of MIT while I was at W3C.<P>
    
    If a server closes the input side of the connection while the client
    is sending data (or is planning to send data), then the server's TCP
  ! stack will signal an RST (reset) back to the client.  Upon
    receipt of the RST, the client will flush its own incoming TCP buffer
    back to the un-ACKed packet indicated by the RST packet argument.
    If the server has sent a message, usually an error response, to the
    client just before the close, and the client receives the RST packet
    before its application code has read the error message from its incoming
  ! TCP buffer and before the server has received the ACK sent by the client
  ! upon receipt of that buffer, then the RST will flush the error message
  ! before the client application has a chance to see it. The result is
  ! that the client is left thinking that the connection failed for no
  ! apparent reason.<P>
    
    There are two conditions under which this is likely to occur:
  ! <OL>
  ! <LI>sending POST or PUT data without proper authorization
  ! <LI>sending multiple requests before each response (pipelining) 
  !     and one of the middle requests resulting in an error or
  !     other break-the-connection result.
  ! </OL>
  ! <P>
    The solution in all cases is to send the response, close only the
    write half of the connection (what shutdown is supposed to do), and
    continue reading on the socket until it is either closed by the
  ***************
  *** 262,280 ****
    That is what the kernel is supposed to do if SO_LINGER is set.
    Unfortunately, SO_LINGER has no effect on some systems; on some other
    systems, it does not have its own timeout and thus the TCP memory
  ! segments just pile-up until the next reboot (planned or not).
  ! 
  ! That is why rst coded-up a linger replacement.  As I recall, he said at
  ! the time that it needed further testing, which we never got around to
  ! doing.  From the descriptions I have read, it sounds like the lingering
  ! close code is doing something wrong when it is timed-out, since that
  ! is what happens if a client does not close its connection.
    
    Please note that simply removing the linger code will not solve the
  ! problem -- it only moves it to a different and much harder to detect one.
  ! 
  ! .....Roy
  ! </PRE>
    </OL>
    <!--#include virtual="footer.html" -->
    </BODY>
  --- 287,296 ----
    That is what the kernel is supposed to do if SO_LINGER is set.
    Unfortunately, SO_LINGER has no effect on some systems; on some other
    systems, it does not have its own timeout and thus the TCP memory
  ! segments just pile-up until the next reboot (planned or not).<P>
    
    Please note that simply removing the linger code will not solve the
  ! problem -- it only moves it to a different and much harder one to detect.
    </OL>
    <!--#include virtual="footer.html" -->
    </BODY>
  
  
  
