apr-dev mailing list archives

From Justin Erenkrantz <jerenkra...@ebuilt.com>
Subject Re: [proposal] apr_thread_setconcurrency()
Date Sun, 16 Sep 2001 07:55:10 GMT
On Sat, Sep 15, 2001 at 04:43:39PM -0700, Aaron Bannert wrote:

> > If you create too many LWPs, you will lose a lot of optimizations 
> > that are present in Solaris (i.e. handover of a mutex to another 
> > thread in the same LWP - as discussed with bpane on dev@httpd 
> > recently).
> 
> Of course, and that is something the caller needs to take into consideration.
> I'm not forcing you to use it, I just think it needs to be available.

I'm saying that it should never be used.  Simple.  You can't use
that call properly in any real-world case - just as I don't think 
you should ever call sched_yield.  You are attempting to solve a 
problem that is best solved somewhere else - the base operating 
system.

The testlock case doesn't matter because it never hits any of the 
Solaris-defined entry points.  This is a quirk in the OS and I see 
no reason to work around it.  If you want to make testlock do the 
right thing with the Solaris LWP model, use a reader/writer lock
to synchronize the starting of the threads.  This way you guarantee 
that all threads are started before you start execution of the 
tight exclusive loop (which is something that testlock doesn't do 
now).  You are assuming that the threads are created in parallel -
nowhere is that ordering guaranteed.
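
Something along these lines is what I have in mind - a standalone
pthreads sketch rather than testlock's actual APR calls, so the thread
count, names, and loop sizes are purely illustrative:

#include <pthread.h>

#define NTHREADS 8

static pthread_rwlock_t start_gate;  /* held for writing until all threads exist */
static pthread_mutex_t  work_mutex = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    int i;
    (void)arg;

    /* Park here until the main thread drops the write lock. */
    pthread_rwlock_rdlock(&start_gate);
    pthread_rwlock_unlock(&start_gate);

    /* Only now enter the tight exclusive loop being timed. */
    for (i = 0; i < 100000; i++) {
        pthread_mutex_lock(&work_mutex);
        /* critical section */
        pthread_mutex_unlock(&work_mutex);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    int i;

    pthread_rwlock_init(&start_gate, NULL);
    pthread_rwlock_wrlock(&start_gate);   /* close the gate */

    for (i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);

    pthread_rwlock_unlock(&start_gate);   /* every thread exists: open the gate */

    for (i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);

    pthread_rwlock_destroy(&start_gate);
    return 0;
}

The write lock is just a start gate: every worker parks on the read
lock until all of them have been created, so the timed section measures
lock contention rather than thread startup ordering.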

> In consideration of your statement here I spent some time reading
> the Solaris 8 libpthread source. On that platform your statement
> here is false. Calling pthread_setconcurrency (or thr_setconcurrency
> for that matter) can only change the number of multiplexed LWPs in
> two ways: either not at all, or by increasing the number. I see
> no way that it acts as a ceiling.

Yes, you are correct and I was wrong - I reread the Solaris Internals 
book on my flight back to LAX today.  It isn't a ceiling.  However, 
the case of creating too many LWPs is completely valid and is brought 
up many times in their discussion of LWPs versus a bound thread model.
Kernel threads are very expensive in Solaris, and part of the reason 
it handles threads well is that it multiplexes the kernel 
threads efficiently.  No other OS I have seen handles threads as
gracefully as Solaris.

My guess is that in Solaris 9 they reworked the kernel thread API to 
be much faster than before, so that it achieves creation, switching, 
and destruction times similar to those of the multiplexed user-space 
threads.  If they did that, I believe it then makes sense to switch to 
bound threads by default.  (I do need to double-check that they have 
switched to bound threads by default in Solaris 9.)

> >                                          This is not a hint, but a 
> > command.  (Yes, the man page for Solaris says that it is a hint, 
> > but it treats it as a command.)
> 
> Sorry, but that's just BS, and I don't know where you get off making such
> bold unfounded statements. Please just go read the source, they match
> the man pages.

I believe SUSv2 called it a "hint" for the general case.  However, in 
this specific implementation (multiplexed kernel threads), it is not 
a hint.  It is a request to have that many LWPs.  If you disagree
with that statement, please look at the code again.
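
If you want to see it for yourself, a trivial standalone program like
this will do - the concurrency level of 16 and the suggestion of ps for
inspecting the process are just for illustration, and whether or when
the LWPs actually appear is exactly the point under discussion:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* SUSv2 describes this as a hint; the claim here is that Solaris'
     * multiplexed implementation treats it as a request for that many
     * LWPs. */
    int rc = pthread_setconcurrency(16);
    if (rc != 0) {
        fprintf(stderr, "pthread_setconcurrency failed: %d\n", rc);
        return 1;
    }

    /* pthread_getconcurrency() only echoes back the requested level;
     * to see the LWPs actually created, look at the process from the
     * outside, e.g. "ps -o pid,nlwp -p <pid>" on Solaris. */
    printf("requested concurrency level: %d (pid %ld)\n",
           pthread_getconcurrency(), (long)getpid());

    sleep(60);  /* leave time to inspect the process */
    return 0;
}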

> > - Let the programmer decide.  Awfully bad choice.  Who knows
> > how the system is setup?  What are you optimizing for?
> 
> This is the only choice I proposed, I don't know what the heck you are
> arguing about in these other things. Of course let the programmer decide,
> that's why it's an API!
> 
> I just gave you an example where I would use it: in the worker MPM.
> In that case it would be the number of simultaneous requests I expect
> to serve.

I pointed out that number (simultaneous requests) is a completely 
bogus number to use when dealing with multiplexed kernel threads.
This poor choice is why I don't think this call belongs in APR at all.  
If you would care to claim that the number of simultaneous requests is
the correct number in the context of a multiplexed thread model for
worker, I would be delighted to hear why - you haven't offered any 
proof as to its validity.  I indicated why I thought that number was
wrong.  I'll repeat it again with a bit more of a technical 
explanation.

Creating all user threads as bound (what you are suggesting for 
worker by calling pthread_setconcurrency with that value) in a 
multiplexed thread model works against the thread model rather than 
with it - this indicates a clash in design.  You want a bound thread 
library, but refuse to use a bound thread library.  

Ideally, most of the worker MPM's time will be spent dealing with I/O, so
there is no need to have spurious kernel threads in such a usage
pattern.  Solaris has a number of safeguards that will ensure that any
runnable thread (kernel or user) will run as quickly as it can and it 
will only create as many kernel threads as are actually dictated by
the load (if there are really 8 threads ready to run, 8 execution
contexts will be available).

With "scheduler activations" (Solaris 2.6+), when a user thread is 
about to block and other user threads are waiting to execute, the
running LWP will pass that unbound (but now blocked) thread off to 
an idle LWP (via doors).  If no free LWPs are available (all LWPs 
are blocked or executing), a new LWP is spawned (via SIGWAITING) 
and the now-blocked unbound user thread is transferred.

This blocked user thread will resume via what Solaris calls "user 
thread activation" - shared memory and a door call which indicates to 
the kernel thread when a user thread is ready for execution (i.e. 
needs the LWP active now because whatever blocked it has now been
unblocked).  So as soon as the message is sent, the kernel will 
reschedule the appropriate LWP.

Okay, back to the original LWP that the user thread was on - it still 
has time left on its original quantum because its user thread blocked 
prematurely, so it searches for a waiting unbound thread to execute 
in the remainder of its time.

In the common case of a user thread blocking with a free LWP already 
created, you have saved a kernel context switch (the running LWP 
sticks the user thread in an idle LWP by itself) - this is why this 
M*N implementation can be faster than bound threads.  The context
switch is free and the responsiveness is thus higher.  This also 
causes it to create kernel threads as needed.  

The entire idea of a multiplexed kernel thread model (such as 
Solaris) is to minimize the number of actual kernel threads and 
increase responsiveness.  You would be circumventing that 
decision by creating bound kernel threads that may not be 
actually required due to the actual execution pattern of the code.  
You will also decrease responsiveness because switching between 
threads now becomes a kernel issue rather than a cheap user-space 
issue (which is what Solaris wants to do by default).  However,
you would be doing this in a library that was optimized for multiple 
user-space threads, not bound threads.

I believe that if you really want a bound thread implementation, you
should tell the OS you want it - not muck around with an indeterminate 
API that directly circumvents the scheduling/balancing process.
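
In pthreads terms, that means asking for system contention scope on the
threads themselves.  A rough sketch, again standalone rather than APR,
with the worker body and error handling reduced to placeholders:

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)
{
    (void)arg;
    /* ... request-handling loop would go here ... */
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_t tid;
    int rc;

    pthread_attr_init(&attr);

    /* Ask for a bound (one kernel thread per user thread) thread
     * explicitly, instead of nudging the library with
     * pthread_setconcurrency(). */
    rc = pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
    if (rc != 0)
        fprintf(stderr, "system contention scope not supported: %d\n", rc);

    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);

    pthread_attr_destroy(&attr);
    return 0;
}

That way the decision is explicit and per-thread, and the library's
multiplexing machinery is left alone for everything else.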

> There you go again with this "OS scheduler" thing that I've never heard
> of. 10 seconds to stabilize is rather long when you consider I have
> already served O(5000) requests.

You are really attempting to make this a personal argument here by
attacking me.  I think this is completely uncalled for and 
inappropriate.

10 seconds isn't a long time for a server that will be up for months 
or years.  And, as you said, you pulled that number (10 seconds) out 
of thin air.  If you can substantiate it with real results, please
provide them.  I don't consider a 10 second delay for the 
OS to properly balance itself under a particular thread model to be an issue.
And, what is the impact of not having enough LWPs initially?  Were
you testing on an SMP or UP box?  What was the type of CPU load that
was being performed before it was balanced (usr, sys, or iowait)?

You also haven't mentioned how many LWPs it stabilized at after
10 seconds.  Did Solaris choose to add an LWP for each user thread?  
I have a feeling it wouldn't, but I may be wrong.  -- justin

