apr-dev mailing list archives

From Aaron Bannert <aa...@clove.org>
Subject Re: [proposal] apr_thread_setconcurrency()
Date Mon, 17 Sep 2001 02:11:41 GMT
On Sun, Sep 16, 2001 at 04:12:58PM -0700, Justin Erenkrantz wrote:
> > But of course that case is not terribly relevant for something like
> > httpd-2.0 on a big SMP box, where the optimal case (of which there are
> > many dimensions) cannot be known to the underlying thread/LWP creation
> > agent. That is the key issue at hand here. We, as _users_ of this API,
> > would like to maximize each of {requests/second, time/request, number of
> > simultaneous connections}, while the LWP creation agent is just trying to
> > get the work done with the least amount of context switching. The dials it
> > has to play with are numerous, so it must perform a delicate linear
> > programming task in an attempt to meet the same goals as the application
> > programmer. I don't claim that setconcurrency is the way to reduce the
> > number of variables in this equation, but I do suggest we may want to
> > take this into consideration when trying to make our threaded algorithms
> > work the way we expect them to.
> I just don't think it is going to get us what you want.  I think
> the net result with setconcurrency on Solaris with LWPs is to 
> circumvent its balancing algorithms so that it creates too many 
> LWPs.  I think this is the wrong way to attack this problem and 
> goes against the design of their thread library.  On all other 
> platforms (and with bound thread impl on Solaris), setconcurrency 
> is an ignored hint.  -- justin

The only platforms I know of that have a two-level thread model are AIX
and Solaris. The single-level thread libs ignore setconcurrency because
every thread is what Solaris calls a "bound thread", i.e. a kernel-scheduled
entity (it gets its own process slot). The only exceptions to this rule
are fully userspace thread libs, where the concurrency level is inherently
fixed at 1.
