httpd-dev mailing list archives

From: Dean Gaudet <>
Subject: Re: SO_LINGER test results
Date: Mon, 10 Feb 1997 00:49:31 GMT
Ack, looks like I goofed by setting SO_LINGER on the listen socket rather
than on the socket returned from accept.  Apparently Linux doesn't inherit
those settings from the original socket.  Attached is a corrected version
of the test program.  After re-running the tests, the only change was on
Linux -- it now behaves like IRIX as described below.
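The attachment itself isn't reproduced in this archive view, so here is a
minimal sketch of the corrected approach (placeholder port, strings and
timings, not the actual test-linger.c): the linger option is set on the fd
returned by accept(), not on the listen socket.

    /* Minimal sketch of the corrected approach -- not the actual test-linger.c.
     * The port number, message strings and the 20-second pause are placeholders;
     * the point is that SO_LINGER is set on the fd returned by accept(), not on
     * the listen socket. */
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        int lfd, fd;
        struct sockaddr_in sa;
        struct linger lin;

        lfd = socket(AF_INET, SOCK_STREAM, 0);
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_addr.s_addr = htonl(INADDR_ANY);
        sa.sin_port = htons(8765);              /* placeholder port */
        bind(lfd, (struct sockaddr *)&sa, sizeof(sa));
        listen(lfd, 1);

        fd = accept(lfd, NULL, NULL);

        /* Set the linger option on the accepted socket, not on lfd. */
        lin.l_onoff = 1;
        lin.l_linger = 10;                      /* seconds (1000 on Linux, per its man page) */
        setsockopt(fd, SOL_SOCKET, SO_LINGER, (char *)&lin, sizeof(lin));

        printf("%ld first write\n", (long)time(0));
        write(fd, "first packet\r\n", 14);

        sleep(20);          /* disconnect the client from the network during this pause */

        printf("%ld second write\n", (long)time(0));
        write(fd, "second packet\r\n", 15);

        printf("%ld entering close()\n", (long)time(0));
        close(fd);
        printf("%ld close() returned\n", (long)time(0));
        return 0;
    }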


On Sun, 9 Feb 1997, Dean Gaudet wrote:

> Ok so I decided to try tinkering with SO_LINGER.  I figured that I should
> be able to test if an OS supports it "properly" by writing a simple server
> that writes two packets, spaced apart.  In between the two I disconnect
> the client from the network.  So the second packet is definately going to
> be in the queue still on the server when the close happens.  Then I print out
> all the time stamps involved and see if the server gets control after the
> linger timeout.
> I wanted to set the linger timeout to 10 seconds, but get this:
>     IRIX and BSDI man pages and header files make no mention of what
>     unit of time the l_linger field is in.
>     Solaris man pages say l_linger is in seconds.
>     Linux man pages say l_linger is in hundredths of seconds.
> So for everything but Linux I used l_linger = 10, and for Linux I used
> l_linger = 1000 (the unit handling is sketched after this message).
> If my test is correct, the results are pitiful:
>     IRIX 5.3:
>     IRIX 6.2:
> 	blocks the calling task in close() but definitely doesn't respect
> 	the 10-second timeout
>     Solaris 2.5.1:
>     Linux-2.0.27:
> 	Neither blocks the task; the socket is put into FIN_WAIT_1 and it
> 	sticks around a lot longer than 10 seconds.
>     BSDI 2.1:
> 	works properly
> I'm including the test program.  Here's the sequence for using it:
>     0. On the server, tweak the l_linger setting appropriately.  Then
> 	"gcc test-linger.c" (you might need to add -lnsl -lsocket).
>     1. ./a.out
>     2. On the client "telnet serverip port#" and watch the first packet
> 	arrive.  Now disconnect the client from the network.
>     3. The server will send the second packet, and go into close().
> 	When it's done with the close() it'll say so... every line is
> 	stamped with time(0).
> Can someone tell me if this procedure is correct?
> Dean
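For reference, the l_linger unit discrepancy noted in the quoted message
works out to something like the helper below; the __linux__ conditional is
just one plausible way to pick the value, not code taken from the actual
test-linger.c.

    #include <sys/socket.h>

    /* Request roughly a 10-second linger on fd, accounting for the unit
     * difference the man pages describe (hundredths of a second on Linux,
     * seconds on Solaris; IRIX and BSDI don't document the unit). */
    static int set_linger_10s(int fd)
    {
        struct linger lin;

        lin.l_onoff = 1;
    #ifdef __linux__
        lin.l_linger = 1000;    /* hundredths of a second */
    #else
        lin.l_linger = 10;      /* seconds */
    #endif
        return setsockopt(fd, SOL_SOCKET, SO_LINGER, (char *)&lin, sizeof(lin));
    }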
