tomcat-dev mailing list archives

From Christopher Schultz <>
Subject Re: bindOnInit and maxConnections for AJP connectors
Date Fri, 08 Apr 2011 15:25:04 GMT

On 4/8/2011 4:50 AM, Tim Whittington wrote:
> On Fri, Apr 8, 2011 at 2:40 AM, Christopher Schultz
> <> wrote:
>> Mark,
>> I understand that a fix has already been applied, but...
>> On 4/6/2011 7:16 AM, Mark Thomas wrote:
>>> I thought of two options for issue 3:
>>> a) Assign a processor (+ inputbuffer, output buffer etc.) to a socket
>>> and don't recycle it until the socket is closed.
>>> - Increases memory requirements.
>>> - Fixes issue 3
>>> - Retains current request processing order.
>>> b) Check the input buffer at the end of the loop in
>>> Http11Processor#process() and process the next request if there is any
>>> data in the input buffer.
>>> - No increase in memory requirements.
>>> - Fixes issue 3
>>> - Pipelined requests will get processed earlier (before they would have
>>> been placed at the back of the request processing queue)
>>> I think option b) is the way to go to fix issue 3.
>> What about a third option that is essentially (a) except that you trim
>> the input buffer and discard the already-processed request(s) from it.
>> The input buffer stays bound to the request and continues where it
>> left-off when another request processor becomes available.
>> That would maintain scheduling fairness and hopefully not require much
>> in the way of memory usage. Since pipelining requests with entities is
>> not a good idea, the chances of getting a large amount of data in the
>> input buffer are relatively low. There's also a limit on the size of
>> that buffer.
>> Would that still represent too large of a memory requirement?
> The input buffer is 8k by default (max header size), so this could be
> significant with a large maxConnections.
> Pruning the buffers to retain only required space would generate a lot
> of garbage, so probably not a good option either.
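For reference, option (b) boils down to a check like this at the end of the processing loop. This is a grossly simplified sketch (the names are made up for illustration, not the actual Http11Processor internals):

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Simplified sketch of option (b): after servicing a request, check whether
 * the connection's input buffer still holds pipelined data and, if so,
 * process the next request immediately rather than handing the socket back
 * to the poller. Illustrative only; not actual Tomcat code.
 */
public class OptionB {

    /** Services all requests currently sitting in the input buffer and
     *  returns how many were handled in this single pass. */
    static int process(Deque<String> inputBuffer) {
        int handled = 0;
        while (!inputBuffer.isEmpty()) {
            String request = inputBuffer.poll(); // parse + service one request
            handled++;
            // Option (b): loop again while pipelined data remains, instead
            // of recycling the processor and losing the buffered bytes.
        }
        return handled;
    }

    public static void main(String[] args) {
        Deque<String> connectionBuffer = new ArrayDeque<>();
        connectionBuffer.add("GET /a HTTP/1.1");
        connectionBuffer.add("GET /b HTTP/1.1"); // pipelined second request
        System.out.println("handled in one pass: " + process(connectionBuffer));
        // prints "handled in one pass: 2"
    }
}
```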

Are those buffers ever discarded? I guess it comes down to whether the
8k buffer "belongs" to the connection or to the request. It looks like
the bug arises from the buffer being treated like it belongs to the
request when it really belongs to the connection.
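If the buffer does belong to the connection, then the trimming I was suggesting is essentially what ByteBuffer.compact() does: discard the bytes of the request already parsed and slide the unread remainder to the front. Something like this (the end-of-request detection is hand-rolled and purely illustrative):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

/**
 * Sketch of the "trim the input buffer" idea: after one request is parsed,
 * discard its bytes and keep only the unread remainder, so the buffer can
 * stay bound to the connection without holding stale data.
 * Names and parsing are illustrative, not actual Tomcat internals.
 */
public class BufferTrim {

    /** Consumes everything up to and including the first blank line
     *  (a stand-in for "one parsed request"), then compacts the buffer.
     *  Expects the buffer in write mode; leaves it in write mode. */
    static String trimFirstRequest(ByteBuffer buf) {
        buf.flip();                       // switch to read mode
        StringBuilder consumed = new StringBuilder();
        while (buf.hasRemaining()) {
            consumed.append((char) buf.get());
            // crude end-of-headers check: trailing "\r\n\r\n"
            if (consumed.length() >= 4
                    && consumed.substring(consumed.length() - 4).equals("\r\n\r\n")) {
                break;
            }
        }
        buf.compact();                    // drop consumed bytes, keep the rest
        return consumed.toString();
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(8192);
        buf.put(("GET /a HTTP/1.1\r\nHost: x\r\n\r\n"
               + "GET /b HTTP/1.1\r\nHost: x\r\n\r\n")
                .getBytes(StandardCharsets.ISO_8859_1));

        String first = trimFirstRequest(buf);
        System.out.println("consumed: " + first.trim());
        // Only the unread pipelined request remains in the buffer:
        System.out.println("bytes retained for next processor: " + buf.position());
    }
}
```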

I agree, switching to a request-owned buffer strategy would certainly
increase the memory footprint since you'd need a buffer for each pending
request (which may be quite high when using NIO and/or async). Thanks for
clarifying that.

> If the fairness becomes a practical problem, reducing
> maxKeepAliveRequests (100 by default) would force pipelining clients
> to the back of the queue regularly.
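For anyone following along, that's the maxKeepAliveRequests attribute on the Connector element in server.xml. The value below is just an example; the default is 100:

```xml
<!-- server.xml: cap keep-alive/pipelined requests per connection so a
     busy pipelining client is periodically re-queued behind other
     connections. 10 is illustrative, not a recommendation. -->
<Connector port="8080" protocol="HTTP/1.1"
           maxKeepAliveRequests="10" />
```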

How would this work, though? If the bug under discussion arises from a
connection essentially disconnecting one of these buffers from the
request whence it came, doesn't re-queuing the request re-introduce the bug?

