httpd-dev mailing list archives

From Stefan Fritsch
Subject Re: event mpm and mod_status
Date Mon, 07 Jan 2013 22:33:25 GMT
On Monday 07 January 2013, Daniel Lescohier wrote:
> I see that event mpm uses a worker queue that uses a condition
> variable, and it does a condition variable signal when something
> is pushed onto it. If all of the cpu cores are doing useful work,
> the signal is not going to force a context switch out of a thread
> doing useful work, the thread will continue working until it's
> used up its timeslice, as long as it doesn't end early because it
> becomes blocked on i/o or something like that.  So, in that case,
> the first thread to finish serving its request and get to
> ap_queue_pop_something(worker_queue...) will get the item on the
> queue, and it will have hot caches.  On the other hand, if only
> some of the cpu cores are active, but the threads that are active
> are busy in the middle of a request, the condition variable signal
> will wake up one of the idle threads, the thread will be scheduled
> onto an idle core, and that thread can start doing useful
> work.  That thread may have colder caches, but the other cores
> were busy doing useful work any way, so it's worth activating this
> thread on an idle core.
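
For reference, here is a minimal sketch of the kind of condvar-guarded 
queue being discussed. This is a simplification, not the actual APR 
fdqueue code; all names are illustrative:

```c
#include <assert.h>
#include <pthread.h>

/* Simplified sketch of a condition-variable worker queue, loosely
 * modeled on the behavior described above.  NOT httpd's real code. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  not_empty;
    void *items[64];
    int   nitems;
} queue_t;

static void queue_push(queue_t *q, void *item)
{
    pthread_mutex_lock(&q->lock);
    q->items[q->nitems++] = item;
    /* Wake at most one waiter.  If every core is already busy, the
     * woken thread merely becomes runnable; whichever worker reaches
     * queue_pop() first (often one with warm caches) can still win
     * the race for the item. */
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

static void *queue_pop(queue_t *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->nitems == 0)
        pthread_cond_wait(&q->not_empty, &q->lock);
    void *item = q->items[--q->nitems];
    pthread_mutex_unlock(&q->lock);
    return item;
}
```
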

But on the idle core, it may make a difference which thread is used. A 
common pattern with event mpm is that a work thread does only a very 
small piece of work when writing some additional data to the network 
during write completion. It will then block again until 
ap_queue_pop_something(worker_queue...) returns with some more work to 
do. The likelihood of warm caches is much higher if the same thread 
(or the same small set of threads) always does these small bits of 
work than if all (e.g. 30) idle threads each do a small piece of work 
in round-robin fashion.
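
A toy model (not httpd code) makes the difference concrete: with a pool 
of idle threads and a burst of short jobs, count how many distinct 
threads get touched under a "most-recently-idle first" (LIFO) hand-off 
versus a round-robin wakeup. The constants and policy choices below are 
purely illustrative:

```c
#include <assert.h>

#define NTHREADS 30   /* idle worker threads (illustrative) */
#define NJOBS    100  /* short write-completion jobs (illustrative) */

/* Count distinct threads used to serve NJOBS short jobs.  Each job is
 * so short that the serving thread is idle again before the next job
 * arrives, so under LIFO the same thread is always "most recently
 * idle" and keeps getting picked. */
static int distinct_threads(int lifo)
{
    int used[NTHREADS] = {0};
    int next_rr = 0;
    for (int j = 0; j < NJOBS; j++) {
        int t = lifo ? 0                     /* most-recently-idle */
                     : next_rr++ % NTHREADS; /* round robin */
        used[t] = 1;
    }
    int n = 0;
    for (int t = 0; t < NTHREADS; t++)
        n += used[t];
    return n;
}
```

Under LIFO a single thread (with warm caches) serves every job; round 
robin drags all 30 threads, and their cold caches, through the work.
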

Of course, one needs to do real benchmarks when trying to optimize 
this.

> Also, if a worker thread is idle waiting on
> ap_queue_pop_something(worker_queue...), and the thread was
> unscheduled by the OS process scheduler, when you wake it up
> again, the OS process scheduler won't necessarily schedule it on
> the same cpu core it was on before.  So, it won't necessarily have
> a warm cache after you wake it up again.  You're only guaranteed a
> warm cache if the thread remains scheduled by the OS.  The current
> queue/condvar implementation favors sending new requests to
> threads that remain scheduled by the OS.
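
On Linux one can observe this migration directly with sched_getcpu(). 
A hypothetical probe, not httpd code, with the sleep standing in for 
blocking in the queue pop:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <sched.h>
#include <unistd.h>

/* Returns 1 if the scheduler moved this thread to a different core
 * across a blocking wait, 0 otherwise.  Linux-specific. */
static int migrated_across_wait(void)
{
    int before = sched_getcpu();
    usleep(100000);  /* stand-in for blocking in queue_pop() */
    int after = sched_getcpu();
    /* If before != after, any per-core cache warmth is gone. */
    return before != after;
}
```
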
