httpd-dev mailing list archives

From Ian Holsman <i...@apache.org>
Subject Re: Bucket management strategies for async MPMs?
Date Wed, 04 Sep 2002 02:47:10 GMT
Paul J. Reder wrote:
> 
> 
> Brian Pane wrote:
> 
>> I've been thinking about strategies for building a
>> multiple-connection-per-thread MPM for 2.0.  It's
>> conceptually easy to do this:
>>
>>  * Start with worker.
>>
>>  * Keep the model of one worker thread per request,
>>    so that blocking or CPU-intensive modules don't
>>    need to be rewritten as state machines.
>>
>>  * In the core output filter, instead of doing
>>    actual socket writes, hand off the output
>>    brigades to a "writer thread."
> 
> 
> 
> During a discussion today, the idea came up to have the
> code check whether the response could be written directly
> instead of always passing it to the writer. If the whole
> response is present and can be successfully written, why not
> save the overhead? If the write fails, or the response is too
> complex, then pass it over to the writer.
> 
> 
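
To make the direct-write idea concrete, the check in the core output
filter could look roughly like this. This is only a sketch:
pass_to_writer() is a made-up stand-in for the handoff, and it ignores
brigade flattening, sendfile, metadata buckets, and so on.

static apr_status_t try_direct_write(apr_socket_t *sock, const char *buf,
                                     apr_size_t len, void *writer_ctx)
{
    apr_size_t nbytes = len;
    apr_status_t rv;

    apr_socket_timeout_set(sock, 0);     /* never block in the worker */
    rv = apr_socket_send(sock, buf, &nbytes);

    if (rv == APR_SUCCESS && nbytes == len) {
        return APR_SUCCESS;              /* whole response went out */
    }
    if (rv != APR_SUCCESS && !APR_STATUS_IS_EAGAIN(rv)) {
        return rv;                       /* hard error: abort as usual */
    }

    /* partial write or EAGAIN: hand the rest to the writer thread */
    return pass_to_writer(writer_ctx, buf + nbytes, len - nbytes);
}
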
>>
>>  * As soon as the worker thread has sent an EOS
>>    to the writer thread, let the worker thread
>>    move on to the next request.
> 
> 
> 
> I have a small concern here. Right now the writes are
> providing the throttle that keeps the system from generating
> so much queued output that we burn system resources. If
> we allow workers to generate responses without a throttle,
> it seems possible that the writer's queue will grow to the
> point that the system starts running out of resources.
> 
maybe if we use something like the queue (apr-util/misc/apr_queue.c)
to submit the write requests, we could limit the number of outstanding
writes to X, with the worker threads sleeping when the queue gets full.
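
Roughly like this -- MAX_PENDING_WRITES and the init/submit helpers are
invented names, but the blocking push is what would give us the throttle:

#include "apr_queue.h"
#include "apr_pools.h"

#define MAX_PENDING_WRITES 256          /* the "X" above */

static apr_queue_t *write_queue;

/* called once at child init */
static apr_status_t writer_queue_init(apr_pool_t *pchild)
{
    return apr_queue_create(&write_queue, MAX_PENDING_WRITES, pchild);
}

/* worker side: sleeps here once MAX_PENDING_WRITES jobs are queued */
static apr_status_t submit_write(void *job)
{
    return apr_queue_push(write_queue, job);
}

/* writer side: sleeps here when there is nothing to write */
static apr_status_t next_write(void **job)
{
    return apr_queue_pop(write_queue, job);
}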

I'm actually working on a dynamically growing thread pool which would
read the queue and adjust the number of threads based on the size of
the queue (eventually I want to adjust the number of threads based on
the response time).

If anyone is interested in the (currently buggy) code, I'll put it up on
webperf somewhere.
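
The sizing loop is basically just a manager thread watching the queue
depth. Very rough shape below; the water marks and the grow/shrink
details are placeholders:

#include "apr_queue.h"
#include "apr_thread_proc.h"
#include "apr_time.h"

#define MIN_THREADS    4
#define LOW_WATER      8
#define HIGH_WATER    64
#define MAX_THREADS  256

static int nthreads = MIN_THREADS;

static void * APR_THREAD_FUNC pool_manager(apr_thread_t *t, void *arg)
{
    apr_queue_t *q = arg;

    for (;;) {
        unsigned int depth = apr_queue_size(q);

        if (depth > HIGH_WATER && nthreads < MAX_THREADS) {
            /* apr_thread_create() another consumer thread here */
            nthreads++;
        }
        else if (depth < LOW_WATER && nthreads > MIN_THREADS) {
            /* signal an idle consumer thread to exit */
            nthreads--;
        }
        apr_sleep(apr_time_from_sec(1));
    }
    return NULL;
}
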
> Only testing will show for sure, and maybe in the real world
> it would only happen for brief periods of heavy load, but it
> seems like we need some sort of writer queue thresholding
> with pushback to control worker throughput.
> 
> Of course, if we do add a throttle for the workers, then how
> does this really improve things? The writer was the throttle
> before and it would be again. We've added an extra queue so
> there will be a period of increased worker output until the
> queue threshold is met but, once the queue is filled, we revert
> to the writer being the throttle. The workers cannot finish
> their current response until the writer has finished writing
> a queued response and freed up a queue slot.
> 
>>
>>  * In the writer thread, use a big event loop
>>    (with /dev/poll or RT signals or kqueue, depending
>>    on platform) to do nonblocking writes for all
>>    open connections.
>>
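
The shape of that loop, using plain poll(2) just for the sketch (the
real thing would use /dev/poll, kqueue, or RT signals per platform;
conn_for_fd(), write_some_nonblocking(), and connection_done() are
invented helpers):

#include <poll.h>

typedef struct conn_state conn_state;               /* per-connection output state */
extern conn_state *conn_for_fd(int fd);
extern int write_some_nonblocking(conn_state *c);   /* 1 when the brigade is done */
extern void connection_done(conn_state *c);

static void writer_loop(struct pollfd *fds, int nfds)
{
    int i;

    for (;;) {
        if (poll(fds, nfds, -1) <= 0) {
            continue;
        }
        for (i = 0; i < nfds; i++) {
            if (fds[i].revents & POLLOUT) {
                conn_state *c = conn_for_fd(fds[i].fd);
                if (write_some_nonblocking(c)) {
                    connection_done(c);   /* free buckets, recycle the slot */
                }
            }
        }
    }
}
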
>> This would allow us to use a much smaller number of
>> worker threads for the same amount of traffic
>> (at least for typical workloads in which the network
>> write time constitutes the majority of each request's
>> duration).
>>
>> The problem, though, is that passing brigades between
>> threads is unsafe:
>>
>>  * The bucket allocator alloc/free code isn't
>>    thread-safe, so bad things will happen if the
>>    writer thread tries to free a bucket (that's
>>    just been written to the client) at the same
>>    time that a worker thread is allocating a new
>>    bucket for a subsequent request on the same
>>    connection.
>>
>>  * If we delete the request pool when the worker
>>    thread finishes its work on the request, the
>>    pool cleanup will close the underlying objects
>>    for the request's file/pipe/mmap/etc buckets.
>>    When the writer thread tries to output these
>>    buckets, the writes will fail.
>>
>> There are other ways to structure an async MPM, but
>> in almost all cases we'll face the same problem:
>> buckets that get created by one thread must be
>> delivered and then freed by a different thread, and
>> the current memory management design can't handle
>> that.
>>
>> The cleanest solution I've thought of so far is:
>>
>>  * Modify the bucket allocator code to allow
>>    thread-safe alloc/free of buckets.  For the
>>    common cases, it should be possible to do
>>    this without mutexes by using apr_atomic_cas()
>>    based spin loops.  (There will be at most two
>>    threads contending for the same allocator--
>>    one worker thread and the writer thread--so
>>    the amount of spinning should be minimal.)
>>
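
The CAS-based free list is pretty small. A sketch, using
apr_atomic_casptr() as the pointer flavour of the apr_atomic_cas()
mentioned above; "node" stands in for the allocator's free-block header,
and a real version of the pop has to worry about the ABA case:

#include "apr_atomic.h"
#include <stddef.h>

typedef struct node {
    struct node *next;
} node;

static void freelist_push(node *volatile *head, node *blk)
{
    node *old;
    do {
        old = *head;
        blk->next = old;
    } while (apr_atomic_casptr((void *volatile *)head, blk, old) != old);
}

static node *freelist_pop(node *volatile *head)
{
    node *old;
    do {
        old = *head;
        if (old == NULL) {
            return NULL;
        }
        /* note: susceptible to ABA if more than two threads touch it */
    } while (apr_atomic_casptr((void *volatile *)head, old->next, old) != old);
    return old;
}
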
>>  * Don't delete the request pool at the end of
>>    a request.  Instead, delay its deletion until
>>    the last bucket from that request is sent.
>>    One way to do this is to create a new metadata
>>    bucket type that stores the pointer to the
>>    request pool.  The worker thread can append
>>    this metadata bucket to the output brigade,
>>    right before the EOS.  The writer thread then
>>    reads the metadata bucket and deletes (or
>>    clears and recycles) the referenced pool after
>>    sending the response.  This would mean, however,
>>    that the request pool couldn't be a subpool of
>>    the connection pool.  The writer thread would have
>>    to be careful to clean up the request pool(s)
>>    upon connection abort.
>>
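
A first cut at that metadata bucket might look like the following.
Only the interesting entries are shown, everything outside the
apr_bucket_* API is invented, and the destroy hook is where the
deferred pool cleanup would happen:

#include "apr_buckets.h"

static apr_status_t request_pool_bucket_read(apr_bucket *b, const char **str,
                                             apr_size_t *len,
                                             apr_read_type_e block)
{
    *str = NULL;
    *len = 0;
    return APR_SUCCESS;                 /* metadata bucket: no data */
}

static void request_pool_bucket_destroy(void *data)
{
    apr_pool_destroy(data);             /* or clear + recycle the pool */
}

static const apr_bucket_type_t bucket_type_request_pool = {
    "REQUEST_POOL", 5, APR_BUCKET_METADATA,
    request_pool_bucket_destroy,
    request_pool_bucket_read,
    apr_bucket_setaside_noop,
    apr_bucket_split_notimpl,
    apr_bucket_simple_copy
};

/* worker thread: append one of these right before the EOS */
static apr_bucket *request_pool_bucket_create(apr_pool_t *rpool,
                                              apr_bucket_alloc_t *list)
{
    apr_bucket *b = apr_bucket_alloc(sizeof(*b), list);

    APR_BUCKET_INIT(b);
    b->free   = apr_bucket_free;
    b->list   = list;
    b->type   = &bucket_type_request_pool;
    b->length = 0;
    b->start  = 0;
    b->data   = rpool;
    return b;
}
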
>> I'm eager to hear comments from others who have looked
>> at the async design issues.
>>
>> Thanks,
>> Brian
>>
>>
>>
> 
> 


