httpd-dev mailing list archives

From Aaron Bannert <>
Subject Re: [PATCH] worker MPM: reuse transaction pools
Date Tue, 28 Aug 2001 21:41:46 GMT
On Tue, Aug 28, 2001 at 12:17:03PM -0700, Greg Stein wrote:
> On Mon, Aug 27, 2001 at 05:09:01PM -0700, Aaron Bannert wrote:
> > This patch implements a resource pool of context pools -- a queue of
> > available pools that the listener thread can pull from when accepting
> > a request. The worker thread that picks up that request then uses
> > that pool for the lifetime of that transaction, clear()ing the pool
> > and releasing it back to what I'm calling the "pool_queue" (har har).
> > This replaces the prior implementation that would create and destroy
> > a transaction pool for each and every request.
> > 
> > I'm seeing a small performance improvement with this patch, but I suspect
> > the fd_queue code could be improved for better parallelism. I also
> > suspect that with better testing this algorithm may prove more scalable.
> What does "small" mean?
> I can't believe it is all that large. Pool construction/destruction is
> actually quite fast. The bulk of the time is clearing the pool, which you
> must do anyways. I don't see how a pool queue can provide any benefit.
> IOW, why should this complexity be added? Just how much does it improve
> things, and are you testing on a single or multi processor machine?
> Cheers,
> -g
> p.s. and yes, I know Ryan just applied it, but that doesn't mean it should
> stay there :-)

Honestly, I can't give you any quantitative results right now, as I don't
have a very good load-testing environment set up. By "small" I mean that
running 'ab' at various concurrency levels showed a possible improvement (on
my single-CPU machine), and definitely no loss of efficiency. If anyone out
there could give me before-and-after results on some MP machine (preferably
4-way or more), that would be very useful.

I have an alternative that I've been working on. It's basically a thread pool
where N threads are created and stuffed into a queue. Each element in
the queue contains: a mutex and condition variable, a state variable
(an int), a pointer to an apr_socket_t, and a pool. As the listener
prepares to accept a waiting request, it pops an element off the queue,
uses that pool to do the accept, sets the socket, and signals the condition.
That thread then takes off, handles the request, clear()s the pool, and
returns itself to the queue. (When the queue is empty, the listener
blocks until another element becomes available.)

The benefits of this scheme over the current one are:
1) less of the code is within critical sections, so more parallelism
2) less contention on the mutex guarding the condition, so more scalable
3) we keep the benefits of reusable transaction pools (which might be
   optimized further with an SMS algorithm tuned specifically to the
   block sizes typical of a single HTTP request transaction)

Time permitting, I will try to post a patch illustrating this later today.
