httpd-dev mailing list archives

From "Plüm, Rüdiger, VF-Group" <>
Subject RE: Proxy regressions
Date Wed, 10 Nov 2010 11:09:00 GMT

> -----Original Message-----
> From: Graham Leggett 
> Sent: Wednesday, 10 November 2010 11:47
> To:
> Subject: Re: Proxy regressions
> On 10 Nov 2010, at 11:49 AM, Plüm, Rüdiger, VF-Group wrote:
> >> Have we not created a pool lifetime problem for ourselves here?
> >>
> >> In theory, any attempt to read from the backend connection should
> >> create buckets allocated from the r->connection->bucket_alloc
> >> allocator, which should be removed from the backend connection when
> >> the backend connection is returned to the pool.
> >
> > I guess we need a dedicated bucket_allocator at least in the beginning
> > as we cannot guarantee that anyone in the create_connection hook uses
> > the bucket_allocator to create an object that should persist until the
> > connrec of the backend connection dies.
> >
> > Exchanging the allocator later each time we get the connection from
> > the conn pool might create similar risks. But I admit that the latter
> > is only a gut feeling and I simply do not feel comfortable exchanging
> > the allocator. I have no real hard facts why this cannot be done.
> The proxy currently creates the allocator in
> ap_proxy_connection_create(), and then passes the allocator to the
> various submodules via the ap_run_create_connection() hook, so it
> looks like we are just passing the wrong allocator.

The problem is that we keep the connrec structure in the conn pool;
it is not created anew each time we fetch a connection from the pool.
This is required to enable keepalives with SSL backends.
As I said, if we pass the bucket allocator from the frontend connection,
we possibly end up with other pool lifetime issues, and SSL in
particular comes to mind.

> > So without trying to offend anyone, can we see the use case for the
> > asap returning again?
> Right now, we are holding backend connections open for as long as it
> takes for a frontend connection to acknowledge the request. A typical
> backend could be finished within milliseconds, while the connection to
> the frontend often takes hundreds, sometimes thousands of
> milliseconds. While the backend connection is being held open, that
> slot cannot be used by anyone else.

Used by whom? As I said, if you put it back into the pool and your pool
has the same max size as the number of threads in the process, then
there is a good chance that this connection will simply idle in the pool
until the original thread has sent its data to the client and fetches
the connection from the pool again.
As I said, I can only follow this argument if the max pool size is
configured to be smaller than the number of threads in the process.
Do you do this?

Another possibility: depending on the request behaviour on your frontend
and the distribution between locally handled requests (e.g. static
content, cache) and backend content, it might happen that the number of
actual backend connections in the pool does not grow as much (i.e. to
its max size) if each connection is returned to the pool asap.
Is this the effect you intend to get?

> In addition, when backend keepalives are kept short (as ours are), the
> time it takes to serve a frontend request can exceed the keepalive
> timeout, creating unnecessary errors.

Why does this create errors? The connection is closed by the backend
because it has delivered all its data to the frontend server and has not
received a new request within the keepalive window. So the backend is
actually free to reuse these resources, and the frontend will notice
that the backend has disconnected the next time it fetches the
connection from the pool and will simply establish a new one.

> This issue is a regression that was introduced in httpd 2.2; httpd
> 2.0 released the connection as soon as it was done.

Because 2.0 had a completely different architecture: the released
connection was not usable by anyone else but the same frontend
connection, because it was stored in the conn structure of the frontend
request. So the result with 2.0 is the same as with 2.2.
