Received: by taz.hyperreal.com (8.8.4/V2.0)
	id TAA26466; Tue, 11 Feb 1997 19:01:33 -0800 (PST)
Received: from twinlark.arctic.org by taz.hyperreal.com (8.8.4/V2.0) with SMTP
	id TAA26287; Tue, 11 Feb 1997 19:01:21 -0800 (PST)
Received: (qmail 21439 invoked by uid 500); 12 Feb 1997 03:01:16 -0000
Date: Tue, 11 Feb 1997 19:01:16 -0800 (PST)
From: Dean Gaudet
To: new-httpd@hyperreal.com
Subject: Re: [PATCH] lingering_close performance improvement
In-Reply-To: <199702120222.VAA22330@shado.jaguNET.com>
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Sender: new-httpd-owner@apache.org
Precedence: bulk
Reply-To: new-httpd@hyperreal.com

That's why I suggested a MaxSocketsPendingClose option... for both the
systems with limited fds and those that have so many vhosts they have
limited fds to work with.  The server would have to stay in the cleanup
loop until an fd freed up to enter select().

It's too bad the mutex stuff can't go for 1 second timeouts.  I guess it
could with an alarm(1).  Then the child could alternate between waiting
for a request and processing closing sockets.  It's only "multithreaded"
in a really small section of code.

BTW, "full" multithreading is equally hard on systems with limited
numbers of fds... they'll run into the same problem.  (Unless you're
talking about a global fd limit.)

Dean

On Tue, 11 Feb 1997, Jim Jagielski wrote:

> Chuck Murcko wrote:
> >
> > Roy T. Fielding wrote:
> > >
> > > >I will not object to your committing it, but ask that everyone keep in
> > > >mind that this may not be what we end up shipping in 1.2 if something
> > > >better comes up.
> > >
> > > Yes, absolutely -- I just want the testing to be based on what we have now
> > > instead of what we had last week.
> > >
> > > >What do you think of Dean's suggestion of keeping a history of sockets
> > > >that are lingering and then just going through them each time through the
> > > >main loop before we accept a new request?  Assuming, of course, that it
> > > >can be implemented cleanly which is not necessarily a valid assumption.
> > >
> > > That is basically what RST's multithreaded code does, and I think it is
> > > a much better idea than what we are currently doing.  I just don't know
> > > how to do it without introducing multithreading, etc.
> > >
> > True, but this is really the only correct (and nonkludgey) way to deal
> > with pipelined request errors, isn't it?  We'll need this anyway, since I
> > don't think Henrik's tests ran on really crappy connections.
>
> Unless we are fully multithreaded, doing this really puts a hurting
> on those systems with limited numbers of fd's :/
>
> --
> ====================================================================
> Jim Jagielski  |  jaguNET Access Services
> jim@jaguNET.com  |  http://www.jaguNET.com/
> "Not the Craw... the CRAW!"
>