tomcat-users mailing list archives

From Christopher Schultz <>
Subject Re: AJP connection pool issue bug?
Date Wed, 04 Oct 2017 21:01:17 GMT


On 10/4/17 3:45 PM, TurboChargedDad . wrote:
> Perhaps I am not wording my question correctly.

Can you confirm that the connection-pool exhaustion appears to be
happening on the AJP client (httpd/mod_proxy_ajp) and NOT on the
server (Tomcat/AJP)?

If so, the problem will likely not improve by switching-over to an
NIO-based connector on the Tomcat side.

Having said that, the real problem is likely to be simple arithmetic.
Remember this expression:

Ctc = Nhttpd * Cworkers

Ctc = connections Tomcat should be prepared to accept (e.g. the
Connector's maxThreads setting)
Nhttpd = # of httpd servers
Cworkers = total # of connections in the httpd connection pool for all
workers

Imagine the following scenario:

Nhttpd = 2
Cworkers = 200
Ntomcat = 2

On httpd server A, we have a connection pool with 200 connections. If
Tomcat A goes down, all 200 connections will go to Tomcat B. If Tomcat
A stops responding for both proxies, then both proxies will send all
200 of their connections to Tomcat B. That means that Tomcat B needs
to be able to support 400 connections, not 200.

Let's say you now have 5 workers (1 for each application). Each worker
gets its own connection pool, and each pool holds 200 connections.
Now, we have a situation where each httpd instance actually has
1000 (potential) connections in the connection pool, and if Tomcat A
goes down, Tomcat B must be able to handle 2000 connections (1000 from
httpd A and 1000 from httpd B).

At some point, you can't provision enough threads to handle all of
those connections.
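The arithmetic above can be sketched in a few lines of Java (the
numbers are simply the ones from this example, not a recommendation):

```java
// Worst-case AJP connection arithmetic: Ctc = Nhttpd * Cworkers.
// Assumption: every httpd proxy can fail all of its pooled
// connections over to a single surviving Tomcat.
public class AjpCapacity {

    static int requiredConnections(int nHttpd, int cWorkers) {
        // Connections one Tomcat must be able to accept if it is
        // the last one standing.
        return nHttpd * cWorkers;
    }

    public static void main(String[] args) {
        // 2 proxies, one 200-connection pool each: survivor needs 400.
        System.out.println(requiredConnections(2, 200));     // 400
        // 5 workers of 200 connections each = 1000 per proxy,
        // so a lone surviving Tomcat must absorb 2000.
        System.out.println(requiredConnections(2, 5 * 200)); // 2000
    }
}
```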

The solution (bringing this back around again) is to use NIO, because
you can handle a LOT more connections with a lower number of threads.
NIO doesn't allow you to handle more *concurrent* traffic (in fact, it
makes performance a tiny bit worse than BIO), but it will allow you to
have TONS of idle connections that aren't "wasting" request-processing
threads that are just waiting for another actual request to come
across the wire.
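For illustration, a connector along these lines makes the
thread/connection split explicit: maxThreads caps concurrent request
processing, while maxConnections caps how many (mostly idle) AJP
connections the NIO poller will keep open. The port and limits below
are made-up values, not tuning advice:

```xml
<!-- NIO AJP connector: few processing threads, many idle
     connections. Port and limits are illustrative only. -->
<Connector port="8009"
           protocol="org.apache.coyote.ajp.AjpNioProtocol"
           redirectPort="8443"
           maxThreads="300"
           maxConnections="10000" />
```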

> As a test I changed the following line in one of the many tomcat
> instances running on the server and bounced it.
>
> Old:
> <!-- <Connector port="9335" protocol="AJP/1.3" redirectPort="8443"
>      maxThreads="300" /> -->
>
> New:
> <Connector port="9335"
>      protocol="org.apache.coyote.ajp.AjpNioProtocol"
>      redirectPort="8443" maxThreads="300" />

Yep, that's how to do it.

> As the docs state I am able to verify that it did in fact switch
> over to NIO.
> INFO: Starting ProtocolHandler ["ajp-nio-9335"]

Good. Now you can handle many idle connections with the same number of
request-processing threads.
> Will running NIO and BIO on the same box have a negative impact?


> I am thinking they should all be switched to NIO, this was just a
> test to see if I was understanding what I was reading.

I would recommend NIO in all cases.

> That being said, I suspect there are going to be far more tweaks
> that need to be applied, as there are none to date.

Hopefully not. A recent Tomcat (which you don't actually have) with a
stock configuration should be fairly well-configured to handle a great
deal of traffic without falling-over.

> I also know that the HTTPD server is running in prefork mode.

That will pose some other issues for you, mostly the ability to handle
bursts of high concurrency from your clients. You can consider it
out-of-scope for this discussion, though. What we want to do for you
is stop httpd+Tomcat from freaking out and getting stopped-up with
even a small number of users.

> Which I think leaves me with no control over how many connections
> can be handed back from apache on a site by site basis.

Probably not on a site-by-site basis, but you can adjust the
connection-pool size on a per-worker basis. For prefork it MUST BE
connection_pool_size=1 (the default for prefork httpd) and for
"worker" and similarly-threaded MPMs the default should be fine to use.

> Really having a hard time explaining to others how BIO could have
> caused the connection pool for another user to become exhausted.

If one of your Tomcats locks-up (database is dead; might want to check
to see how the application is accessing that... infinite timeouts can
be a real killer, here), it can tie-up connections from
mod_proxy_ajp's connection pool. But those connections should be
per-worker and shouldn't interfere with each other. Unless you have an
uber-worker that handles everything for all those various Tomcats.

Can you give us a peek at your worker configuration? You explained it
a bit in your first post, but it might be time for some more details...

-chris

