httpd-dev mailing list archives

From Ruediger Pluem <rpl...@apache.org>
Subject Re: strange usage pattern for child processes
Date Sat, 18 Oct 2008 14:42:10 GMT


On 10/18/2008 03:18 PM, Graham Leggett wrote:
> Ruediger Pluem wrote:
> 
>> The code Graham is talking about was introduced by him in r93811 and was
>> removed in r104602 about 4 years ago. So I am not astonished any longer
>> that I cannot remember this optimization. It was before my time :-).
>> This optimization was never in 2.2.x (2.0.x still ships with it).
>> BTW: This logic flow cannot be restored easily, because due to the current
>> pool and bucket allocator usage it *must* be ensured that all buckets
>> are flushed down the chain before we can return the backend connection
>> to the connection pool. At the time this was removed, and in 2.0.x, we did
>> *not* use a connection pool for backend connections.
> 
> Right now, the problem we are seeing is that the expensive backends are
> tied up until slow frontends eventually get round to completely
> consuming responses, which is a huge waste of backend resources.
> 
> As a result, the connection pool has made the server slower, not faster,
> and very much needs to be fixed.

I agree in theory. But I don't think it holds in practice.

1. 2.0.x behaviour: If you used keepalive connections to the backend,
   the connection to the backend was kept alive, and as it was bound to the
   frontend connection in 2.0.x it couldn't be used by other connections.
   Depending on the backend server it wasted the same amount of resources
   as without the optimization (backends like httpd worker or httpd prefork) or
   a small amount of resources (backends like httpd event with HTTP or a recent
   Tomcat web connector). So you didn't benefit much from this optimization
   in 2.0.x unless you turned off the keepalives to the backend (see the
   config sketch after this list).

2. The optimization only helps for the last chunk being read from the backend,
   which is at most ProxyIOBufferSize in size. If ProxyIOBufferSize isn't
   set explicitly this amounts to just 8k. I guess if you have clients
   or connections that take a long time to consume just 8k you are in trouble
   anyway. Plus the default socket and TCP buffers on most OSes should already be
   larger than this. So in order to profit from the optimization, the time
   the client needs to consume a ProxyIOBufferSize worth of data has to be
   remarkably long.
   When using a lot of backend connections on the frontend server, a
   high ProxyIOBufferSize consumes a larger amount of memory (which might
   be acceptable as RAM isn't that expensive anymore).
   So currently the only thing you can do to get the effect of the optimization
   is to increase the send buffer size in parallel to store the stuff in the
   kernel-memory-backed socket buffer, which leads to memory consumption that
   is twice as large as without the optimization (see the config sketch below).
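To make the two points above concrete, here is a rough config sketch. It is
not from the thread; the hostname, path, and buffer values are just examples,
and the exact effect of the buffer sizes depends on the httpd version. The
directives and the proxy-nokeepalive variable themselves are the stock
mod_proxy / core ones.

    # Point 1 (2.0.x-era workaround): turn off keepalives to the backend so
    # the proxied connection is not held open per frontend client.
    <Location /app>
        ProxyPass http://backend.example.com/app
        SetEnv proxy-nokeepalive 1
    </Location>

    # Point 2: the proxy reads at most ProxyIOBufferSize from the backend per
    # pass (8192 bytes unless set). Raising it lets the frontend soak up more
    # of the tail of a response, at the cost of that much memory per backend
    # connection.
    ProxyIOBufferSize 65536

    # Raising the kernel send buffer in parallel parks more of the response in
    # the socket buffer; together with the userspace buffer this is the
    # "twice the memory" effect described above.
    SendBufferSize 65536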

Regards

Rüdiger
