tomcat-users mailing list archives

From Mark Thomas <>
Subject Re: Async servlets and Tomcat 7 vs. mod_jk
Date Tue, 03 May 2011 08:18:26 GMT
On 02/05/2011 22:49, Jess Holle wrote:
> What are the limitations/requirements of using asynchronous servlets in
> Tomcat 7?
> We use Apache and mod_jk to balance load over multiple Tomcats.  I note
> that there is no NIO AJP connector -- only BIO and APR.  I have *no*
> interest in the native APR connectors -- as it's simply *far* too
> painful to produce 7 different native builds of this (some on platforms
> with unbelievably horrific linkers).
> What does this mean for us?  Does this mean asynchronous servlets won't
> work for us?  Or that they'll "work", but we won't actually reduce the
> thread usage in Tomcat at all?

They'll work, and you'll see decreased thread usage with all connectors
including HTTP-BIO and AJP-BIO. However, it isn't quite that simple.

With Servlet 3 async requests in progress, Tomcat will have more
concurrent connections than it is using threads to process them, since
some of those connections will be associated with async requests that
are currently 'paused'. Since maxThreads still controls the maximum
number of threads, this means BIO now supports more concurrent
connections than there are threads available. This causes a different
problem.

When there are more connections than threads, Tomcat needs to know which
connection to service next when a thread becomes available. With NIO and
APR this is easy since they support non-blocking IO: all the connections
are added to a poller, and the poller signals when one of the
connections has data to read. That connection can then be passed to a
thread for processing. With BIO, pollers simply aren't available. The
current BIO implementation adds the connections to a queue and they are
processed in turn as threads become available. This can create some
unexpected behaviour. Consider the following:
maxThreads = 200
connectionTimeout = 5000
connections = 1000, all in http keep-alive

In the worst case scenario, if those 1000 connections enter keep-alive
at around the same time, a new connection is going to wait more than 25
seconds to be processed. Let me explain why:
1001 connections in the queue (1000 in keep-alive plus the new one), 200
of which are assigned to threads
5s later, those 200 threads time out their connections and are passed
the next 200 connections to process
now 801 connections in the queue, 200 of which are assigned to threads
another 5s, those 200 threads time out
now 601 connections
another 5s...
now 401 connections
another 5s...
now 201 connections
another 5s
now 1 connection left in the queue (the new one); this now gets processed.
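The drain described in the steps above is easy to model. Here is a
minimal sketch (an illustrative model I wrote for this explanation, not
Tomcat code) showing why the new connection waits 25 seconds:

```python
# Model of the BIO connection-queue drain: 1000 idle keep-alive
# connections plus 1 new connection queue up behind maxThreads=200
# worker threads, each of which waits connectionTimeout=5s before
# moving on to the next queued connection.
def wait_for_new_connection(idle, max_threads, timeout_s):
    """Return how long the new connection (queued last) waits, in seconds."""
    queued = idle + 1          # the new connection sits at the back of the queue
    waited = 0
    while queued > 1:          # each round, max_threads connections time out
        queued -= max_threads
        waited += timeout_s
    return waited

print(wait_for_new_connection(idle=1000, max_threads=200, timeout_s=5))  # 25
```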

So despite there being nothing to process on those 1000 keep-alive
connections, the new connection, which did have data waiting, took 25s+
to be processed.

Part of this is due to a TODO in the BIO connector (that is fixed for
the next release). Timeouts were calculated from when the thread
started processing the connection, not from when the connection entered
the keep-alive state (i.e. when it was added to the processing queue).
With this bug fixed, the new connection in the scenario above would wait
just over 5 seconds.
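The fix amounts to charging each connection for the time it has already
spent in the queue. A sketch of the idea (hypothetical names, not the
actual Tomcat internals):

```python
# Sketch of the timeout fix: the remaining wait is measured from when the
# connection entered the keep-alive queue, not from when a worker thread
# picked it up. Function and parameter names are illustrative only.
def remaining_timeout(connection_timeout_s, queued_at, now):
    """Time a worker should still wait on this connection before timing out."""
    already_waited = now - queued_at
    return max(0.0, connection_timeout_s - already_waited)

# A connection queued 4s ago with a 5s connectionTimeout gets only 1s more,
# so threads cycle through stale keep-alive connections much faster.
print(remaining_timeout(5.0, queued_at=100.0, now=104.0))  # 1.0
```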

I am currently working on simulating polling with the BIO connector. The
good news is that it works but the price is significantly increased CPU
usage. In my tests CPU usage increased from ~38% to 51% when simulated
polling is used.

I haven't completed the patch, and once it is committed the other
committers may not like it, so keep an eye on the dev list for the
eventual solution, but I am currently working towards something along
the lines of:
maxThreads defaults to 200 (as it does now)
maxConnections defaults to maxThreads (it currently defaults to 10000)
maxConnections <= maxThreads: do nothing
maxConnections > maxThreads: enable pollTime (a new feature, defaulting
to 100ms)
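In server.xml terms the proposal would look something like this. Note
that pollTime is only a proposed attribute at this point, so treat the
name as tentative:

```xml
<!-- BIO HTTP connector; maxConnections > maxThreads would turn on the
     proposed simulated polling. pollTime is the proposed (tentative)
     attribute name; the other attributes exist today. -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11Protocol"
           maxThreads="200"
           maxConnections="1000"
           connectionTimeout="5000"
           pollTime="100" />
```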

The recommendation will be not to set maxConnections > maxThreads
without a lot of performance testing and to use NIO or APR instead if at
all possible.

pollTime is how long a thread waits for data on a connection before
putting it back in the connection queue. By using this, Tomcat
effectively scans through the current connections looking for data to
process. Going back to my example above, this would mean the new
connection waiting ~0.5s before being processed. Not ideal, but better
than the 5s it was.
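The ~0.5s figure falls out of simple arithmetic: each of the 200 threads
holds a connection for pollTime before re-queueing it, so a full scan of
1000 connections takes (1000 / 200) x 100ms. A quick model (again my
own simplified illustration, not Tomcat code):

```python
# Rough worst-case wait under simulated polling: workers hold each
# connection for poll_time_ms before re-queueing it, so scanning the whole
# queue takes ceil(connections / max_threads) rounds of poll_time_ms each.
def worst_case_scan_wait_ms(connections, max_threads, poll_time_ms):
    rounds = -(-connections // max_threads)   # ceiling division
    return rounds * poll_time_ms

print(worst_case_scan_wait_ms(1000, 200, 100))  # 500, i.e. the ~0.5s above
```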

As I write this it occurs to me that an AJP-NIO connector would be a big
help to you here. I don't know how much work that would be to write but
with the refactoring already completed for Tomcat 7 it might be as
little as 1000 lines of code. If you would be interested in such a
connector create an enhancement request in Bugzilla.

> In Apache, I note that there are noises about broader/better support for
> the mod_event MPM worker.  Does mod_jk work with mod_event there to
> reduce the threads required in Apache?

Sorry, don't know. I have relatively little to do with the native mod_jk.

