tomcat-users mailing list archives

From Jesse Barnum <>
Subject Re: What is the best connector configuration for thousands of mostly idle users?
Date Mon, 10 Feb 2014 16:04:58 GMT
On Feb 7, 2014, at 1:11 PM, Mark Thomas <> wrote:

>> This is a single core box (sorry, should have mentioned that in the configuration
>> details). Would you still expect increasing the worker thread count to help?
> Yes. I'd return it to the default of 200 and let Tomcat manage the pool.
> It will increase/decrease the thread pool size as necessary. Depending
> on how long some clients take to send the data, you might need to
> increase the thread pool beyond 200.
> Mark
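
To make sure I applied this correctly: I took the advice to mean the connector should look roughly like the following (port and protocol taken from the thread names below; this is just my sketch, other attributes left at their defaults):

```xml
<!-- Sketch of the AJP NIO connector with the default pool size.
     port/protocol match the "ajp-nio-8009" thread names; maxThreads="200"
     is the default Mark suggested returning to. -->
<Connector port="8009"
           protocol="org.apache.coyote.ajp.AjpNioProtocol"
           maxThreads="200" />
```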

Unfortunately, this has made the problem worse.

We are now getting site-failure alerts from our monitoring software more frequently, including
outside of peak hours, and CPU usage is running much higher than normal.

The manager page shows 76 of 200 threads busy, and YourKit shows that many of those
threads (I'm assuming 76 - 1) are stuck at this point:

> ajp-nio-8009-exec-148 [WAITING] CPU time: 0:50
> sun.misc.Unsafe.park(boolean, long)
> java.util.concurrent.locks.LockSupport.parkNanos(Object, long)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(int, long)
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(int, long)
> java.util.concurrent.CountDownLatch.await(long, TimeUnit)
> org.apache.tomcat.util.net.NioEndpoint$KeyAttachment.awaitLatch(CountDownLatch, long, TimeUnit)
> org.apache.tomcat.util.net.NioEndpoint$KeyAttachment.awaitReadLatch(long, TimeUnit)
>, NioChannel, long)
>, NioChannel, Selector, long, boolean)
>, NioChannel, Selector, long)
> org.apache.coyote.ajp.AjpNioProcessor.readSocket(byte[], int, int, boolean)
>[], int, int, boolean)
> org.apache.coyote.ajp.AjpNioProcessor.readMessage(AjpMessage, boolean)
> org.apache.coyote.ajp.AjpNioProcessor.receive()
> org.apache.coyote.ajp.AbstractAjpProcessor.refillReadBuffer()
> org.apache.coyote.ajp.AbstractAjpProcessor$SocketInputBuffer.doRead(ByteChunk, Request)
> org.apache.coyote.Request.doRead(ByteChunk)
> org.apache.catalina.connector.InputBuffer.realReadBytes(byte[], int, int)
> org.apache.tomcat.util.buf.ByteChunk.substract(byte[], int, int)
>[], int, int)
>, OutputStream, int)

Almost all requests to the site are POST operations with small payloads. My theory, based
on this stack trace, is that all threads are in contention for the single selector thread
to read the contents of the POST, and that as the number of worker threads increases, so does
thread contention, reducing overall throughput. Please let me know whether this sounds accurate
to you.
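
To make sure I'm reading the trace correctly, here's a toy model of what I think is happening (my own sketch, not Tomcat code): each worker simulates a blocking read by parking on a per-thread CountDownLatch, and a single "poller" thread releases them one at a time, so every blocked read waits its turn behind that one thread.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical model of the pattern in the stack trace (not Tomcat's actual
// code): each worker thread parks on its own CountDownLatch to simulate a
// blocking read, and one "poller" thread releases them one at a time as data
// becomes "readable". With many workers, they all queue up behind that thread.
public class LatchReadModel {
    public static void main(String[] args) throws Exception {
        final int workers = 4;
        final CountDownLatch[] readLatch = new CountDownLatch[workers];
        final AtomicInteger released = new AtomicInteger();
        Thread[] pool = new Thread[workers];

        for (int i = 0; i < workers; i++) {
            readLatch[i] = new CountDownLatch(1);
            final int id = i;
            pool[i] = new Thread(() -> {
                try {
                    // Equivalent of KeyAttachment.awaitReadLatch() in the
                    // trace: the worker parks until the poller signals.
                    if (readLatch[id].await(5, TimeUnit.SECONDS)) {
                        released.incrementAndGet();
                    }
                } catch (InterruptedException ignored) { }
            });
            pool[i].start();
        }

        // The single poller thread: every worker's read waits its turn here.
        for (int i = 0; i < workers; i++) {
            Thread.sleep(10);          // stand-in for selector poll latency
            readLatch[i].countDown();  // "your bytes are ready"
        }

        for (Thread t : pool) t.join();
        System.out.println("released " + released.get() + " of " + workers + " workers");
    }
}
```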

If so, how do I solve this? Here are my ideas, but I'm really not familiar enough with the
connector configurations to know whether I'm on the right track or not:
* Set the '' property to false. It sounds like this
would give each worker thread concurrent access to the POST bodies, although I can't quite
tell from the documentation whether that's true.
* Rewrite my client application to use multiple GET requests instead of single POST requests.
This would be a lot of work, and it seems like it should not be necessary.
* Ditch the NIO connector and Apache/SSL front end and move to APR/SSL with a whole lot of
threads. This also seems like it should not be necessary; I thought my use case was exactly
what NIO was made for.
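
For the first idea, my understanding (an assumption on my part, I couldn't confirm the exact name) is that the flag in question is the '' JVM system property, read once at startup, so it would be set via CATALINA_OPTS rather than in server.xml:

```shell
# Assumption: the shared-selector flag is this system property (default true)
# and is read once when the NIO selector pool starts, so it must be passed
# to the JVM at startup, e.g. in
CATALINA_OPTS="$CATALINA_OPTS"
```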

I'm open to any other ideas, thank you for all of your help!

--Jesse Barnum, President, 360Works
