httpd-dev mailing list archives

From Marc Slemko <>
Subject Re: [PATCH] lingering_close performance improvement
Date Mon, 10 Feb 1997 22:05:39 GMT
On Mon, 10 Feb 1997, Roy T. Fielding wrote:

> >No, it DID go away in many cases.  I have a dozen reports of NO_LINGCLOSE
> >eliminating the problem.  The keepalive bit does not explain the
> >difference between 1.1 and 1.2 nor does it explain why people still had
> >problems when they disabled keepalives.  There is another problem.
> All of those reports are pre-1.2b6.  The only problems that we know

I don't think so.  I will ask all the people to try b6 w/lingering_close
enabled if you want, plus your patch...

> about now, that we have any reports about, should be fixed by the patch
> I made.  On my Ultra1, 1.2b7-dev + my patch is now considerably
> faster than 1.1.3, but the only way we can really test it is if
> we place it on one of the high-load servers.
> At the very least, I'd like to commit it so that others can test
> (and so that I can commit other things in http_main.c, like the
> SIGHUP patch).

I will not object to your committing it, but ask that everyone keep in
mind that this may not be what we end up shipping in 1.2 if something
better comes up.

> >> The only concern we have with l_c() right now is the length of the
> >> timeout and the fact that it might block in a read(), both of which
> >> are fixed in my patch which people have failed to vote on.
> >
> >Personally, I'm not sure if 1s is long enough.  If the RTT is greater than
> >1s (and it often is for dialin users downloading multiple pages) then the
> >same problem would seem to be possible.  
> Possible, yes, but we can take care of 99% of the potential problems
> by just concerning ourselves with clients that are sending a stream
> of data, and for them 1s is long enough.  We already handle the PUT/POST
> drain in mod_cgi, so this code just handles pipeliners and pre-input
> error conditions.  Given the concern people have for leaving client
> processes around, I don't think we can do any better without SO_LINGER.

Note that the implementation of SO_LINGER on some platforms blocks in
close() anyway, so it would not save us from tying up the process.
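To make the SO_LINGER point concrete, setting it looks like this; with l_onoff set and a nonzero l_linger, close() on those platforms blocks until the unsent data is acknowledged or the timer expires, which is exactly the blocking we are trying to avoid in the children:

```c
#include <sys/socket.h>

/* Illustrative helper (name invented): enable SO_LINGER with an
 * n-second timeout.  On platforms where SO_LINGER is implemented
 * synchronously, a later close() on sd can then block for up to
 * `seconds` seconds. */
static int set_linger(int sd, int seconds)
{
    struct linger lg;
    lg.l_onoff = 1;
    lg.l_linger = seconds;
    return setsockopt(sd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
}
```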

What do you think of Dean's suggestion of keeping a history of sockets
that are lingering and then just going through them each time through the
main loop, before we accept a new request?  Assuming, of course, that it
can be implemented cleanly, which is not necessarily a valid assumption.
