httpd-dev mailing list archives

From Stefan Eissing <>
Subject Re: buckets across threads - question
Date Fri, 20 Mar 2015 13:19:43 GMT

In my measurements, the recycling of subpools by the allocator has a clearly measurable performance benefit. Without a mutex'ed allocator, however, the destruction of subpools crashed on me (though it might also be another bug in my code, of course). At first I resorted to using root pools for requests. That worked, but of course no recycling happened there.

With the mutex'ed allocator, having subpools for requests in different threads works nicely. The recycling happens and performance increases drastically (in my tests, I have many small GETs on a single connection).
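To illustrate what the recycling buys: below is a toy free-list allocator guarded by a mutex. It is only a stand-in for what the APR allocator does internally when it is mutex-protected, with invented names, not httpd or APR code; the point is that a "destroyed" block is kept on a free list and handed out again without going back to malloc().

```c
#include <pthread.h>
#include <stdlib.h>

/* Toy recycling allocator: freed blocks go on a free list and are
 * handed out again before malloc() is consulted. The mutex makes
 * alloc/free safe to call from different threads. */
typedef struct block { struct block *next; } block;

typedef struct {
    pthread_mutex_t lock;
    block *free_list;
    size_t block_size;
} recycler;

void recycler_init(recycler *r, size_t block_size) {
    pthread_mutex_init(&r->lock, NULL);
    r->free_list = NULL;
    r->block_size = block_size < sizeof(block) ? sizeof(block) : block_size;
}

void *recycler_alloc(recycler *r) {
    pthread_mutex_lock(&r->lock);
    block *b = r->free_list;
    if (b) r->free_list = b->next;      /* recycle a previously freed block */
    pthread_mutex_unlock(&r->lock);
    return b ? (void *)b : malloc(r->block_size);
}

void recycler_free(recycler *r, void *p) {
    pthread_mutex_lock(&r->lock);
    block *b = p;
    b->next = r->free_list;             /* keep for reuse, no free() */
    r->free_list = b;
    pthread_mutex_unlock(&r->lock);
}
```

Without the mutex, two threads destroying subpools concurrently would race on the free list, which matches the crashes I saw.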

Looking at the code in httpd, mpm_worker and mpm_event seem to create their own allocators for the pools they use, and those allocators are not protected by a mutex. And they need not be, since normally a connection and all its sub-requests run in the same thread, right?



I'll explain a bit how mod_h2 works now in principle (without code), since you asked:

1. a) There are connection filters, as in mod_spdy, for registering in the ALPN negotiation and taking over processing.
   b) There is a request filter for non-TLS connections that looks for "Upgrade:" headers and takes over if the "h2c" protocol token is mentioned (and no request body is present). An intermediate 101 response is sent and the h2c processing then basically takes place on the connection of the request. This works essentially the way the websocket module does it.
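The token check in 1b can be sketched like this (a hypothetical helper, not mod_h2's actual code): "Upgrade:" values are comma-separated token lists, so a plain substring search would wrongly match draft tokens like "h2c-17", and token boundaries have to be respected.

```c
#include <string.h>
#include <ctype.h>

/* Sketch: does an "Upgrade:" header value mention the "h2c" token?
 * Checks token boundaries so that e.g. "h2c-17" does not match. */
int upgrade_has_h2c(const char *value) {
    const char *p = value;
    while ((p = strstr(p, "h2c")) != NULL) {
        int start_ok = (p == value) || p[-1] == ','
                       || isspace((unsigned char)p[-1]);
        char end = p[3];
        int end_ok = end == '\0' || end == ','
                     || isspace((unsigned char)end);
        if (start_ok && end_ok)
            return 1;
        p += 3;
    }
    return 0;
}
```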

2. When h2/h2c is enabled on a connection, a h2_session instance is created that performs the HTTP/2 state handling and administrative processing. When no sub-requests are ongoing, it does blocking reads. Otherwise it does non-blocking reads on the connection and sub-request outputs, with a backoff.
 - libnghttp2 lives isolated in this h2_session; the session is only active in the initial connection thread (t0).
 - The session opens a new apr_pool_t with its own mutex'ed allocator.

3. When a new sub-request arrives (a stream in HTTP/2), the session creates a new subpool and does all allocations belonging to the sub-request from this subpool. All of this stays inside t0 as well, and is represented by a h2_stream. The per-stream subpools are necessary so that very long-lived connections do not eat up memory.
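The role of the per-stream subpool can be shown with a toy bump arena (invented names, not APR's pool implementation): every allocation for one stream comes from its arena, and when the stream ends, one destroy call releases it all. That is what keeps a connection's memory bounded by its currently open streams rather than its lifetime.

```c
#include <stdlib.h>

/* Toy per-stream "subpool": a bump arena. Destroying it returns the
 * whole stream's memory at once, so a long-lived connection does not
 * accumulate allocations from streams that have already finished. */
typedef struct {
    char  *buf;
    size_t used, cap;
} arena;

arena *arena_create(size_t cap) {
    arena *a = malloc(sizeof *a);
    a->buf = malloc(cap);
    a->used = 0;
    a->cap = cap;
    return a;
}

void *arena_alloc(arena *a, size_t n) {
    if (a->used + n > a->cap)
        return NULL;                  /* toy: fixed capacity, no growth */
    void *p = a->buf + a->used;
    a->used += n;
    return p;
}

void arena_destroy(arena *a) {        /* one call frees everything */
    free(a->buf);
    free(a);
}
```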

4. For the handling of sub-requests in other threads (tn), each sub-request gets another subpool that is only used in tn. The object representing this is a h2_task, which is handed to a worker.

5. For the handling of sub-requests, a h2_task creates a new conn_rec and populates it with enough data from the master connection to make the httpd core sufficiently happy to parse and run the request on this connection. The new connection gets its own input/output filters, and mod_ssl is disabled on it.
 - A lot of plumbing is done here that should some day no longer be necessary. The complete sub-requests with all headers do basically exist already, but are serialized into HTTP/1 format again for the core parsers. Attempts to call something like ap_run_request(r) directly have failed so far.
 - mpm_event does currently not play nice, since it has its special event_conn_state structure hidden behind the normal connection state. I have written a hack that makes mod_h2 run in mpm_event, but it makes assumptions about conn_state. A function in the mpm that just sets up a "slave" conn_rec, given a master, would be helpful here; ap_run_create_connection() is not enough.
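The HTTP/1 re-serialization mentioned above boils down to this shape (struct and names invented for illustration, not mod_h2's code): the already-parsed HTTP/2 pseudo-headers are written back out as an HTTP/1.1 request head so the core parser can consume them again.

```c
#include <stdio.h>

/* Sketch of the re-serialization step: turn an already-parsed HTTP/2
 * stream's pseudo-headers back into an HTTP/1.1 request head for the
 * core HTTP/1 parser. Struct and field names are invented. */
typedef struct {
    const char *method;     /* :method    */
    const char *path;       /* :path      */
    const char *authority;  /* :authority */
} h2_request;

int serialize_http1(const h2_request *req, char *out, size_t len) {
    return snprintf(out, len,
                    "%s %s HTTP/1.1\r\n"
                    "Host: %s\r\n"
                    "\r\n",
                    req->method, req->path, req->authority);
}
```

It is exactly this parse-serialize-parse round trip that should some day disappear.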

6. Data Transfers
Request body data needs to be transferred from the session to the tasks, or, thread-wise, from t0 to tn, and response bodies the other way around. Here I am currently working to replace my previous heap-allocated buffers with apr_bucket_brigades. I hope that with the mutex'ed allocator and the proper instance of apr_bucket_alloc_t, I can make this work. mod_h2 would then transfer data uncopied between the sub-requests and the main session processing.

The session has a special object for transferring data between the task threads and the session. It has its own mutex for synchronization and some condition variables for blocking reads/writes. It also controls memory consumption by blocking sub-request responses once a certain amount has been buffered.
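The synchronization pattern of that transfer object looks roughly like the following pthread sketch (names invented, byte payloads reduced to a counter): one mutex, two condition variables, and a cap on the buffered amount. The writer in tn blocks once the cap is reached, which is what bounds memory per sub-request output.

```c
#include <pthread.h>
#include <stddef.h>

/* Sketch of the t0/tn transfer object: a buffered-byte counter guarded
 * by one mutex and two condition variables. The writer blocks once
 * 'buffered' would exceed the cap, bounding memory per sub-request. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  not_full, not_empty;
    size_t buffered, cap;
    int    eos;                            /* writer is finished */
} xfer;

void xfer_init(xfer *x, size_t cap) {
    pthread_mutex_init(&x->lock, NULL);
    pthread_cond_init(&x->not_full, NULL);
    pthread_cond_init(&x->not_empty, NULL);
    x->buffered = 0; x->cap = cap; x->eos = 0;
}

void xfer_write(xfer *x, size_t n) {       /* called from tn */
    pthread_mutex_lock(&x->lock);
    while (x->buffered + n > x->cap)       /* block: too much buffered */
        pthread_cond_wait(&x->not_full, &x->lock);
    x->buffered += n;
    pthread_cond_signal(&x->not_empty);
    pthread_mutex_unlock(&x->lock);
}

size_t xfer_read(xfer *x) {                /* called from t0 */
    pthread_mutex_lock(&x->lock);
    while (x->buffered == 0 && !x->eos)
        pthread_cond_wait(&x->not_empty, &x->lock);
    size_t n = x->buffered;                /* drain everything buffered */
    x->buffered = 0;
    pthread_cond_signal(&x->not_full);
    pthread_mutex_unlock(&x->lock);
    return n;                              /* 0 means end of stream */
}

void xfer_close(xfer *x) {                 /* writer signals end */
    pthread_mutex_lock(&x->lock);
    x->eos = 1;
    pthread_cond_broadcast(&x->not_empty);
    pthread_mutex_unlock(&x->lock);
}

static void *writer(void *arg) {           /* demo: a tn-side producer */
    xfer *x = arg;
    for (int i = 0; i < 8; i++)
        xfer_write(x, 512);                /* 4096 bytes total */
    xfer_close(x);
    return NULL;
}

size_t run_demo(void) {
    xfer x;
    xfer_init(&x, 1024);                   /* cap < total: writer must block */
    pthread_t t;
    pthread_create(&t, NULL, writer, &x);
    size_t total = 0, n;
    while ((n = xfer_read(&x)) > 0)
        total += n;
    pthread_join(&t, NULL);
    return total;
}
```

The cap is smaller than the total written, so the writer is forced to block and resume, yet all bytes arrive, which is the behavior the real object needs.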

Overall, we have:

Main players:
  conn_rec *master 1--->1 h2_session 1--->n* h2_stream 1--->1* h2_task

  master->pool       from the main connection
  h2_session->pool   subpool of master->pool with mutex'ed alloc
  h2_stream->pool    subpool of h2_session->pool
  h2_task->pool      subpool of h2_session->pool

  Mutexes per session:
  1 for the apr_allocator_t
  1 for session/task io sync

Memory/CPU footprint: each session has a maximum number of open sub-requests (configurable), which are executed by a worker pool (min/max per child configurable). Memory is limited by the amount buffered per sub-request output (configurable) and by the maximum input window controlled by HTTP/2 flow control (configurable).

Some clients may keep such connections open for a long time. Here some fine-tuning/configuration (timeouts) might come in handy. From a resource perspective, there will be no difference to a long-lived HTTP/1 connection: if no sub-requests are open, all subpools are destroyed and the session performs a blocking read on its main connection.

> Am 20.03.2015 um 12:44 schrieb Yann Ylavic <>:
> More thoughts...
> On Fri, Mar 20, 2015 at 12:00 PM, Yann Ylavic <> wrote:
>> While pool (allocator) allocations are not thread-safe, creating a
>> subpool is, and each thread can then use its own pool (this model is
>> often used with APR pools).
>> This tells nothing about pool-allocated objects' lifetime though
>> (across/after threads), so maybe you can describe a bit more (but
>> less than the code ;) which object has its own or shared pool, with
>> regard to the http/2 model (request/stream/connection/frame)?
> When a subpool is destroyed, it is also recycled by the allocator for
> further use, and it can also be cleared-then-recycled explicitly (hand
> made, with locking).
> It shouldn't be an issue to create a (sub)pool per http/2 entity
> (request/stream/connection/frame), the lifetime can even be
> controlled/terminated by special (apr_)buckets (destructors), which
> when sent through the filters and handled by the last (core) filter,
> will be cleared (eg. in httpd, when the request is finished, an EOR
> bucket is sent to output filters and will destroy the request once
> (not before!) it is fully forwarded, or an error occurred (thus the
> final brigade containing the special bucket is destroyed, explicitly
> or with its pool, and so is the bucket).
> With this model, it's "just" a matter of which entity belongs to which
> other one (destroyed with it, and possibly cleared-then-recycled in
> the meantime for special needs).
> Can't that fit the http/2 model?

<green/>bytes GmbH
Hafenweg 16, 48155 Münster, Germany
Phone: +49 251 2807760. Amtsgericht Münster: HRB5782
