tomcat-users mailing list archives

From Christopher Schultz <>
Subject Re: Connection count explosion due to thread http-nio-80-ClientPoller-x death
Date Thu, 26 Jun 2014 22:59:51 GMT


On 6/26/14, 9:56 AM, Lars Engholm Johansen wrote:
> Thanks for all the replies guys.
>> Have you observed a performance increase by setting
>> acceptorThreadCount to 4 instead of a lower number? I'm just
>> curious.
> No, but this was the consensus after elongated discussions in my
> team. We have 12 CPU cores - better safe than sorry. I know that
> the official docs read "although you would never really need more
> than 2" :-)

Okay. You might want to do some actual benchmarking. You may find that
more contention for an exclusive lock actually /decreases/ performance.
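For reference, the knob in question lives on the Connector element in
server.xml. A minimal sketch for such a benchmark (port and protocol as
discussed in this thread; the value is illustrative, not a
recommendation):

```xml
<!-- server.xml: vary acceptorThreadCount between benchmark runs and
     measure throughput; the docs suggest you rarely need more than 2 -->
<Connector port="80"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           acceptorThreadCount="2" />
```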

>> The GC that Andre suggested was to get rid of some of the CLOSE_WAIT
>> connections in netstat output, in case those are owned by
>> abandoned and improperly closed I/O classes that are still
>> present in JVM memory.
> Please check out the "open connections" graph at
> As far as I can tell, we only have a
> slight connection count growth during the days until the poller
> thread dies. These may or may not disappear by forcing a GC, but the
> amount is not problematic until we hit the
> http-nio-80-ClientPoller-x thread death.

Like I said, when the poller thread(s) die, you are totally screwed.
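To illustrate why a dead poller is unrecoverable: an Error (unlike an
Exception) sails past the usual catch blocks and kills the thread
outright. This is a standalone sketch, not Tomcat's actual poller code:

```java
// Illustrative sketch (not Tomcat code): a catch (Exception) block does
// not intercept Errors such as StackOverflowError, so an Error thrown
// inside a long-running loop kills the thread outright.
public class PollerDeathDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread poller = new Thread(() -> {
            try {
                throw new StackOverflowError("simulated");
            } catch (Exception e) {
                // never reached: StackOverflowError is an Error, not an Exception
                System.out.println("handled - thread keeps running");
            }
        }, "demo-poller");
        poller.setUncaughtExceptionHandler((t, e) ->
                System.out.println(t.getName() + " died from "
                        + e.getClass().getSimpleName()));
        poller.start();
        poller.join();
    }
}
```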

>> The insidious part is that everything may look fine for a long time
>> (apart
>> from an occasional long list of CLOSE_WAIT connections).  A GC
>> will happen from time to time (*), which will get rid of these
>> connections.  And those CLOSE_WAIT connections do not consume a
>> lot of resources, so you'll never notice. Until at some point,
>> the number of these CLOSE_WAIT connections gets just at the point
>> where the OS can't swallow any more of them, and then you have a
>> big problem. (*) and this is the "insidious squared" part : the
>> smaller the Heap, the more often a GC will happen, so the sooner
>> these CLOSE_WAIT connections will disappear.  Conversely, by
>> increasing the Heap size, you leave more time between GCs, and
>> make the problem more likely to happen.
> You are correct. The bigger the Heap size, the rarer a GC will
> happen - and we have set aside 32 GiB of RAM. But again, referring
> to my "connection count" graph, a missing close in the code does
> not seem to be the culprit.
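The fix Andre's diagnosis implies is to close sockets deterministically
rather than leaving them to GC/finalization, which is exactly what lets
CLOSE_WAIT sockets pile up between collections. A minimal
try-with-resources sketch (loopback sockets on an ephemeral port, for
illustration only):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Sockets that are merely dereferenced stay open (and in CLOSE_WAIT once
// the peer hangs up) until the GC finalizes them; try-with-resources
// closes them at a well-defined point, independent of GC timing.
public class DeterministicClose {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(0); // ephemeral port
             Socket client = new Socket("localhost", server.getLocalPort());
             Socket accepted = server.accept()) {
            System.out.println("connected: " + client.isConnected());
        }
        // all three sockets are closed here, regardless of heap size or GC
        System.out.println("closed without waiting for GC");
    }
}
```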
>> A critical error (java.lang.ThreadDeath,
>> java.lang.VirtualMachineError) will cause death of a thread. A
>> subtype of the latter is java.lang.OutOfMemoryError.
> I just realized that StackOverflowError is also a subclass of
> VirtualMachineError, and remembered that, for company-historical
> reasons, we had configured the JVM stack size to 256 KiB
> (down from the default 1GiB on 64-bit machines). This was to
> support a huge number of threads on limited memory in the past. I
> have now removed the -Xss JVM parameter and am excited to see if
> this solves our poller thread problems. Thanks for the hint,
> Konstantin.

Definitely let us know. A StackOverflowError should be relatively
rare, but if you have set your stack size to something very low, this
can happen.
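The effect of a small -Xss is easy to see: the reachable recursion depth
scales roughly with the stack size, so -Xss256k overflows about four
times sooner than the 1 MiB default. A quick sketch (exact depths vary
by JVM and frame size):

```java
// Run with e.g. -Xss256k vs. the default and compare the printed depth.
public class DeepRecursion {
    static int depth = 0;

    static void recurse() {
        depth++;
        recurse(); // unbounded recursion: each call consumes a stack frame
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            // smaller -Xss means fewer frames fit before this point
            System.out.println("overflowed at depth " + depth);
        }
    }
}
```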

Remember, since you are using the NIO connector, you don't need a huge
number of threads to support a huge number of connections. Total stack
memory is the per-thread stack size multiplied by the number of
threads you keep active.

It looks like you haven't set the number of request-processor threads,
so you'll get the default value of 200.
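For completeness, this is what making that default explicit would look
like in server.xml (the value shown is the documented default; port and
protocol as in this thread):

```xml
<!-- sketch only: 200 request-processor threads at the 1 MiB default
     stack size is roughly 200 MiB of stack memory in total -->
<Connector port="80"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="200" />
```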

The default stack size for Oracle's HotSpot JVM is 1MiB, not 1GiB.
200 threads x 1MiB = 200MiB of stack space in a 64-bit process
shouldn't be too much for your JVM, and will hopefully cut down on
your stack problems.

Do you have a lot of recursive algorithms, or anything that builds
very deep call stacks?

-chris

