tomcat-users mailing list archives

From Filip Hanik <fi...@hanik.com>
Subject Re: What is the best connector configuration for thousands of mostly idle users?
Date Mon, 10 Feb 2014 16:14:43 GMT
Jesse, with mostly idle users and a wish to conserve resources, use
JkOptions +DisableReuse
on the mod_jk module. This closes the connection after each request has
completed. Many will tell you this slows your system down, since a new
connection must be created for every request, but on a LAN the overhead of
connection creation is usually worth it. Measure for yourself.
You can then go back to the regular blocking AJP connector, which will
perform a bit better since it doesn't have to do polling.
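A minimal sketch of the mod_jk side of this, assuming a standard workers.properties setup (the worker name ajp13_worker and the mount pattern are illustrative, not from the thread):

```apache
# httpd.conf (mod_jk section) -- close the backend AJP connection after
# each request instead of keeping it open for reuse
JkOptions +DisableReuse
JkMount /* ajp13_worker
```

On the Tomcat side this pairs with switching the AJP connector from the NIO implementation back to the blocking one (in a Tomcat 7 server.xml, protocol="org.apache.coyote.ajp.AjpProtocol"), since with no kept-alive idle connections there is nothing for a poller to watch.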




On Mon, Feb 10, 2014 at 9:04 AM, Jesse Barnum <jsb_tomcat@360works.com> wrote:

> On Feb 7, 2014, at 1:11 PM, Mark Thomas <markt@apache.org> wrote:
>
> >>
> >> This is a single core box (sorry, should have mentioned that in the
> configuration details). Would you still expect increasing the worker thread
> count to help?
> >
> > Yes. I'd return it to the default of 200 and let Tomcat manage the pool.
> > It will increase/decrease the thread pool size as necessary. Depending
> > on how long some clients take to send the data, you might need to
> > increase the thread pool beyond 200.
> >
> > Mark
>
> Unfortunately, this has made the problem worse.
>
> We are now getting site failure messages from our monitoring software more
> frequently, and outside of peak hours, and CPU usage is running much higher
> than normal.
>
> Looking at the manager page shows 76 threads busy out of 200, and YourKit
> shows that many threads (I'm assuming 76-1) are stuck at this point:
>
> > ajp-nio-8009-exec-148 [WAITING] CPU time: 0:50
> > sun.misc.Unsafe.park(boolean, long)
> > java.util.concurrent.locks.LockSupport.parkNanos(Object, long)
> > java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(int, long)
> > java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(int, long)
> > java.util.concurrent.CountDownLatch.await(long, TimeUnit)
> > org.apache.tomcat.util.net.NioEndpoint$KeyAttachment.awaitLatch(CountDownLatch, long, TimeUnit)
> > org.apache.tomcat.util.net.NioEndpoint$KeyAttachment.awaitReadLatch(long, TimeUnit)
> > org.apache.tomcat.util.net.NioBlockingSelector.read(ByteBuffer, NioChannel, long)
> > org.apache.tomcat.util.net.NioSelectorPool.read(ByteBuffer, NioChannel, Selector, long, boolean)
> > org.apache.tomcat.util.net.NioSelectorPool.read(ByteBuffer, NioChannel, Selector, long)
> > org.apache.coyote.ajp.AjpNioProcessor.readSocket(byte[], int, int, boolean)
> > org.apache.coyote.ajp.AjpNioProcessor.read(byte[], int, int, boolean)
> > org.apache.coyote.ajp.AjpNioProcessor.readMessage(AjpMessage, boolean)
> > org.apache.coyote.ajp.AjpNioProcessor.receive()
> > org.apache.coyote.ajp.AbstractAjpProcessor.refillReadBuffer()
> > org.apache.coyote.ajp.AbstractAjpProcessor$SocketInputBuffer.doRead(ByteChunk, Request)
> > org.apache.coyote.Request.doRead(ByteChunk)
> > org.apache.catalina.connector.InputBuffer.realReadBytes(byte[], int, int)
> > org.apache.tomcat.util.buf.ByteChunk.substract(byte[], int, int)
> > org.apache.catalina.connector.InputBuffer.read(byte[], int, int)
> > org.apache.catalina.connector.CoyoteInputStream.read(byte[])
> > com.prosc.io.IOUtils.writeInputToOutput(InputStream, OutputStream, int)
>
> Almost all requests to the site are POST operations with small payloads.
> My theory, based on this stack trace, is that all threads are in contention
> for the single selector thread to read the contents of the POST, and that
> as the number of worker threads increases, so does thread contention,
> reducing overall throughput. Please let me know whether this sounds
> accurate to you.
>
> If so, how do I solve this? Here are my ideas, but I'm really not familiar
> enough with the connector configurations to know whether I'm on the right
> track or not:
> * Set 'org.apache.tomcat.util.net.NioSelectorShared' property to false. It
> sounds like this would give each worker thread concurrent access to the
> POST requests, although I can't quite tell from the documentation if that's
> true.
> * Re-write my client application to use multiple GET requests instead of
> single POST requests. This would be a lot of work, and seems like it should
> not be necessary.
> * Ditch the NIO connector and Apache/SSL front-end and move to APR/SSL
> with a whole lot of threads. This also seems like it should not be
> necessary; I thought my use case was exactly what NIO was made for.
>
> I'm open to any other ideas, thank you for all of your help!
>
> --Jesse Barnum, President, 360Works
> http://www.360works.com
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: users-help@tomcat.apache.org
>
>
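For reference, the NioSelectorShared setting mentioned in the first idea above is a JVM-level system property read by org.apache.tomcat.util.net.NioSelectorPool, typically set at startup. A sketch, assuming a Tomcat 7 installation with a bin/setenv.sh:

```shell
# bin/setenv.sh -- give each worker its own Selector instead of funneling
# all blocking reads through the single shared NioBlockingSelector
CATALINA_OPTS="$CATALINA_OPTS -Dorg.apache.tomcat.util.net.NioSelectorShared=false"
```

With the shared selector disabled, the connector's selectorPool.maxSelectors and selectorPool.maxSpareSelectors attributes size the per-connector selector pool instead.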
