tomcat-users mailing list archives

From Mark Thomas <>
Subject Re: attempting to achieve 100K concurrent websocket connections on Tomcat 7.0.48 NIO
Date Fri, 01 Nov 2013 23:15:57 GMT
On 01/11/2013 22:00, Bob DeRemer wrote:
> QUESTION: I'm looking for some advice on what Tomcat NIO connector
> settings to use to support 100K concurrent websocket connections.
> Hopefully I  can reach this goal through a combination of Tomcat NIO
> Connector settings, and Server 2008 R2 configuration [if needed].
> BACKGROUND: We're scale testing our websocket application and looking
> to see how many concurrent websocket connections we can get on a
> single Tomcat instance - with the goal being 100K.  I've provided the
> test landscape details at the bottom - all VERY BIG EC2 instances
> over 10 GB network, so memory, CPU and network do not appear to be
> the problem when monitoring.
> PROBLEM: The problem we are running into is that we can't seem to
> establish even 50K connections into Tomcat.  At some point, we start
> getting connect failures, similar to the following:

OK. Win 2k8 R2, 16GB RAM, a bunch of other stuff running, client and
server on the same machine, no tuning. I get 16,313 connections before it
falls over. That is consistent with the Windows default dynamic port
range, which allows a maximum of 16,384 ephemeral ports (given I have
other stuff running).

(I'll commit the test to trunk shortly).

Expanding the dynamic (ephemeral) port range:

netsh int ipv4 set dynamicport tcp start=10000 num=55536

I got as far as 25121 before I hit GC issues.

-Xmx12G -Xms12G fixed the GC problems (well, I say "fixed"; "allowed
them to be ignored" would be closer).

The next run got to 55464 connections which looks to be about the limit
of the ephemeral ports.

> I'm hoping that someone may be able to advise what changes we might
> make to the following Tomcat NIO connector setting that will allow
> upwards of 100K websocket connections:
> <Connector port="80" 
> protocol="org.apache.coyote.http11.Http11NioProtocol" 
> connectionTimeout="20000" maxConnections="100000" 
> maxThreads="100000" redirectPort="8443" />

As Chris said, maxThreads is probably set far too high.

You can set maxConnections="-1" for unlimited connections (worth doing
for this sort of test).
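
Putting that together, a possible starting point for the Connector (the
maxThreads and acceptCount values here are illustrative, not tested
recommendations) would be something like:

```xml
<Connector port="80"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           connectionTimeout="20000"
           maxConnections="-1"
           maxThreads="200"
           acceptCount="1000"
           redirectPort="8443" />
```

With websockets the connections are held open but mostly idle, so a
modest thread pool plus uncounted connections is the usual shape; the
100K figure belongs to maxConnections (or -1), not maxThreads.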

> The test landscape is all Windows Server 2008 R2 boxes running in EC2
> and the Test Client environment:
> Our test client is a multi-threaded java client that makes use of the
> JSR356 ClientEndpoint functionality.  We're creating 40K+ websocket
> connections from a single test client machine.  The test client is
> Server 2008 R2 and we have configured it to allow 50K ephemeral
> ports, so we should be able to establish 40K+ outbound websocket
> connections.
> Server environment:
> *         EC2 instance (cc2.8xlarge) (60 GB, 10 GB network, 16
> vCPUs)
> *         Server 2008 R2
> *         Tomcat 7.0.48 (trunk)
> *         Java 1.7.0_45
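
For reference, a client along the lines described above (JSR-356
@ClientEndpoint, many connections from one JVM) can be sketched roughly
as follows. This is not the poster's actual client; the host, port, path
and connection count are placeholders.

```java
import java.net.URI;
import java.util.ArrayList;
import java.util.List;

import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

@ClientEndpoint
public class MassConnectClient {

    @OnOpen
    public void onOpen(Session session) {
        // Connection established; nothing to do - we just hold the
        // session open so the socket stays connected.
    }

    public static void main(String[] args) throws Exception {
        WebSocketContainer container =
                ContainerProvider.getWebSocketContainer();
        // Placeholder endpoint; substitute the real server and path.
        URI uri = URI.create("ws://server-host:80/test/endpoint");
        List<Session> sessions = new ArrayList<>();
        for (int i = 0; i < 40_000; i++) {
            // Each call opens one outbound websocket connection and
            // consumes one ephemeral port on the client machine.
            sessions.add(container.connectToServer(MassConnectClient.class, uri));
        }
        System.out.println("Open sessions: " + sessions.size());
    }
}
```

Note that each open session pins an ephemeral port on the client, which
is why the client-side dynamicport tuning above matters as much as the
server settings.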

Look at your memory usage. My guess based on my simple test results is
that the JVM for both Tomcat and the client isn't grabbing as much
memory as it really needs. If that is the problem then -Xms20G -Xmx30G
should do the trick.

Keep in mind that Tomcat allocates several 8k buffers for each
connection (client and server - I'd need to check the code to be sure
how many buffers are allocated per connection), so you are going to need
a fair amount of RAM. The TCP buffers will need quite a lot of space too,
but that should be outside of the Java object heap.
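
As a back-of-envelope estimate, assuming two 8k buffers per connection
(an assumption - as noted above, the real per-connection count would
need checking in the Tomcat source):

```java
public class BufferEstimate {
    public static void main(String[] args) {
        long connections = 100_000L;      // target connection count
        long buffersPerConn = 2;          // assumption, not a verified figure
        long bufferSize = 8 * 1024;       // 8 KiB per buffer
        long bytes = connections * buffersPerConn * bufferSize;
        System.out.println(bytes / (1024 * 1024) + " MiB"); // ~1562 MiB
    }
}
```

So the websocket buffers alone land in the low gigabytes at 100K
connections, before counting session objects, application state and GC
headroom - consistent with needing -Xms/-Xmx well above the defaults.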


