tomcat-dev mailing list archives

From: Mark Thomas <>
Subject: Re: bindOnInit and maxConnections for AJP connectors
Date: Thu, 21 Apr 2011 19:21:23 GMT
On 06/04/2011 22:51, Tim Whittington wrote:
> On Wed, Apr 6, 2011 at 11:16 PM, Mark Thomas <> wrote:
>> On 05/04/2011 10:50, Tim Whittington wrote:
>>> Is what's actually going on more like:
>>> APR: use maxConnections == pollerSize (smallest will limit, but if
>>> pollerSize < maxConnections then the socket backlog effectively won't
>>> be used as the poller will keep killing connections as they come in)
>>> NIO: use maxConnections to limit 'poller size'
>>> HTTP: use maxConnections. For keep alive situations, reduce
>>> maxConnections to something closer to maxThreads (the default config
>>> is 10,000 keepalive connections serviced by 200 threads with a 60
>>> second keepalive timeout, which could lead to some large backlogs of
>>> connected sockets that take 50 minutes to get serviced)

This is still an issue and I'm still thinking about how best to address
it. My current thinking is:
- BIO: Introduce simulated polling using a short timeout (see below)
- NIO: Leave as is
- APR: Make maxConnections and pollerSize synonyms
- All: Make the default for maxConnections 8192 so it is consistent with
the current APR default.

The other option is dropping maxConnections entirely from the NIO and
APR connectors. That would align the code with the docs. The only
downside is that the NIO connector would no longer have an option to
limit the connections. I'm not sure that is much of an issue since I
don't recall any demands for such a limit from the user community.
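For reference, the limiting behaviour under discussion amounts to a
counting cap around accept(). A rough sketch of the idea (simplified and
hypothetical, not Tomcat's actual LimitLatch implementation) might look
like:

```java
import java.util.concurrent.Semaphore;

// Hypothetical sketch (not Tomcat's actual LimitLatch) of how an
// endpoint could cap concurrent connections: take a permit before
// handing an accepted socket to a worker, give it back on close.
public class ConnectionLimiter {
    private final Semaphore permits;

    public ConnectionLimiter(int maxConnections) {
        this.permits = new Semaphore(maxConnections);
    }

    // The acceptor thread calls this before serverSocket.accept().
    // Once maxConnections sockets are open it blocks, so further
    // connections queue up in the OS accept backlog instead.
    public void countUp() throws InterruptedException {
        permits.acquire();
    }

    // Non-blocking variant: false means the limit has been reached.
    public boolean tryCountUp() {
        return permits.tryAcquire();
    }

    // Called when a connection is closed.
    public void countDown() {
        permits.release();
    }
}
```

With this in place, dropping maxConnections from a connector simply
means never constructing the limiter, leaving the accept backlog as the
only throttle.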

>> There are a number of issues with the current BIO implementation.
>> 1. Keep-alive timeouts

>> 2. The switch to a queue does result in the possibility of requests with
>> data being delayed by requests without data in keep-alive.
Still TODO.

>> 3. HTTP pipe-lining is broken (this is bug 50957 [1]).

>> The fix for issue 2 is tricky. The fundamental issue is that to resolve
>> it and to keep maxConnections >> maxThreads we need NIO like behaviour
>> from a BIO socket which just isn't possible.
>> Fixing 1 will reduce the maximum length of delay that any one request
>> might experience which will help but that won't address the fundamental
>> issue.
>> For sockets in keepalive, I considered trying to fake NIO behaviour by
>> using a read with a timeout of 1ms, catching the SocketTimeoutException
>> and returning them to the back of the queue if there is no data. The
>> overhead of that looks to be around 2-3ms for a 1ms timeout. I'm worried
>> about CPU usage but for a single thread this doesn't seem to be
>> noticeable. More testing with multiple threads is required. The timeout
>> could be tuned by looking at the current number of active threads, size
>> of the queue etc. but it is an ugly hack.
>> Returning to the pre [3] approach of disabling keep-alive once
>> connections > 75% of threads would fix this at the price of no longer
>> being able to support maxConnections >> maxThreads.
> Yeah, I went down this track as well before getting to the "Just use
> APR/NIO" state of mind.
> It is an ugly hack, but might be workable if the timeout is large
> enough to stop it being a busy loop on the CPU.
> With 200 threads, even a 100ms timeout would give you a 'reasonable' throughput.
> Even if we do this, I still think maxConnections should be somewhat
> closer to maxThreads than it is now if the BIO connector is being
> used.

We can make the timeout configurable but 100ms shouldn't be too bad. I
have a simple, scaled-down test case and this approach looks good in
terms of throughput but there are a few kinks I need to iron out. Also
the CPU usage looked rather high, although that might be a side-effect
of one of the kinks. If the issues can't be ironed out then we'll need a
major rethink about this - probably heading back towards a TC6 style

