httpd-dev mailing list archives

From "Plüm, Rüdiger, VF-Group" <ruediger.pl...@vodafone.com>
Subject RE: Proxy regressions
Date Wed, 10 Nov 2010 13:54:59 GMT
 

> -----Original Message-----
> From: Graham Leggett 
> Sent: Wednesday, 10 November 2010 14:05
> To: dev@httpd.apache.org
> Subject: Re: Proxy regressions
> 
> On 10 Nov 2010, at 1:09 PM, Plüm, Rüdiger, VF-Group wrote:
> 
> >> The proxy currently creates the allocator in
> >> ap_proxy_connection_create(), and then passes the allocator to the
> >> various submodules via the ap_run_create_connection() hook, so it
> >> looks like we are just passing the wrong allocator.
> >
> > The problem is that we keep the conn_rec structure in the conn pool;
> > it is not created each time we fetch a connection from the conn pool.
> > This is required to enable keepalives with SSL backends.
> > As I said, if we pass the bucket allocator from the frontend
> > connection, we possibly end up with other pool lifetime issues, and
> > SSL immediately comes to mind.
> 
> This doesn't sound right to me - ideally anything doing a read of
> anything that will ultimately be sent up the filter stack should use
> the allocator belonging to the frontend connection. When the backend
> connection is returned to the pool, the allocator should be removed,
> and the next allocator inserted when the backend connection is
> subsequently reused.
> 
> Currently what we seem to have is data allocated out of a pool that
> has a lifetime completely unrelated to the frontend request, and then
> we're working around this by trying to keep this unrelated pool alive
> way longer than its useful lifetime, and at least as long as the
> original request. This seems broken to me; we should really be using
> the correct pools all the way through.
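
The swap-on-reuse scheme described above might be sketched roughly as follows. All names here (backend_conn, bucket_allocator, conn_acquire, conn_release) are illustrative stand-ins, not the actual httpd/APR API:

```c
#include <assert.h>
#include <stddef.h>

typedef struct bucket_allocator { int id; } bucket_allocator;

typedef struct backend_conn {
    bucket_allocator *alloc;   /* borrowed from the current frontend conn */
    int pooled;                /* 1 while sitting idle in the conn pool */
} backend_conn;

/* On acquire, the backend connection borrows the frontend's allocator,
 * so buckets it reads are safe to send up the frontend filter stack. */
static void conn_acquire(backend_conn *c, bucket_allocator *frontend_alloc)
{
    c->alloc = frontend_alloc;
    c->pooled = 0;
}

/* On release, the borrowed allocator is detached before the connection
 * goes back into the pool. */
static void conn_release(backend_conn *c)
{
    c->alloc = NULL;
    c->pooled = 1;
}
```

The point of detaching on release is that the pool never holds a reference to a frontend allocator whose lifetime it does not control.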

As I said, this sounds doable for http backends, but not for https backends,
where we need to keep some data regarding the SSL state in the conn_rec
of the backend connection.

> 
> >> Right now, we are holding backend connections open for as long as it
> >> takes for a frontend connection to acknowledge the request. A typical
> >> backend could be finished within milliseconds, while the connection
> >> to the frontend often takes hundreds, sometimes thousands of
> >> milliseconds. While the backend connection is being held open, that
> >> slot cannot be used by anyone else.
> >
> > Used by whom?
> 
> Another worker in httpd.
> 
> > As I said, if you put it back in the pool and your pool has the
> > same max size as the number of threads in the process, then there is
> > some chance that this connection will idle in the pool until the
> > actual thread has sent data to the client and fetches the connection
> > from the pool again. I can only follow this if the max pool size is
> > configured to be smaller than the number of threads in the process.
> > Do you do this?
> 
> Yes. Threads in an application server are expensive, while threads in
> httpd are cheap.
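
For reference, the pool size is tunable per worker; a minimal, illustrative fragment (hostname and numbers made up) capping the backend pool below the thread count might look like:

```apache
# Illustrative only: pool capped below ThreadsPerChild so idle backend
# connections are contended for, as in the scenario discussed above.
ThreadsPerChild 64
ProxyPass /app http://backend.example.com:8080/ max=16 ttl=30
```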
> 
> A further issue is with backend servers where keepalive is switched  
> off. Instead of acknowledging the connection close and releasing the  
> connection, we hold the connection open for ages until the client  
> finally acknowledges the request as finished.

Is this a problem of a too-long lingering-close period on the backend server
blocking the expensive backend server threads?
I mean, in general the backend server is the one that closes the connection
once its keepalive timeout is used up, and hence it can close the socket
from its side. The only thing that comes to mind that could keep it
blocked is a lingering close. Is that the case here?
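
For concreteness, the lingering close in question is roughly the following pattern; this is a simplified sketch, without the timeout a real server would apply to the drain loop:

```c
#include <assert.h>
#include <sys/socket.h>
#include <unistd.h>

/* Simplified sketch of a lingering close: the server half-closes its
 * write side, then drains whatever the peer still has in flight until
 * it sees EOF, and only then fully closes the socket. */
static void lingering_close(int fd)
{
    char buf[512];
    shutdown(fd, SHUT_WR);               /* no more data from us */
    while (read(fd, buf, sizeof buf) > 0)
        ;                                /* discard in-flight peer data */
    close(fd);
}
```

While the server sits in that drain loop waiting for the peer, one of its (expensive) threads stays pinned to the connection, which is the blocking scenario asked about above.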

> 
> >> This issue is a regression that was introduced in httpd 2.2; httpd
> >> 2.0 released the connection as soon as it was done.
> >
> > Because it had a completely different architecture, and the released
> > connection was not usable by anyone else but the same frontend
> > connection, because it was stored in the conn structure of the
> > frontend request. So the result with 2.0 is the same as with 2.2.
> 
> In v2.0, it was only saved in the connection if a keepalive was  
> present. If there was no keepalive, it was released immediately.

Which resulted in no connection pooling at all.

Regards

Rüdiger

