tomcat-users mailing list archives

From Christopher Schultz <>
Subject Re: Handling requests when under load - ACCEPT and RST vs non-ACCEPT
Date Fri, 09 Nov 2012 19:41:26 GMT


On 11/7/12 10:03 PM, Esmond Pitt wrote:
> Asankha
> I haven't said a word about your second program, that closes the
> listening socket. *Of course* that causes connection refusals, it
> can't possibly not, but it isn't relevant to the misconceptions
> about what OP_ACCEPT does that you have been expressing here and
> that I have been addressing.
> Closing the listening socket, as you seem to be now suggesting, is
> a very poor idea indeed: what happens if some other process grabs
> the port in the meantime: what is Tomcat supposed to do then?


This is the TCP/IP equivalent of a busy-wait:

   while (!done) ; // Check again!

Imagine the quite likely case where "high load" means more than just a
single connection over the high-water mark. Let's say:

 active request processors = 100
 backlog = 100

This means that 200 simultaneous connections can get ... somewhat
well-defined behavior. Everyone else gets weirdness. Let's accept that
for the time being.
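In Tomcat terms, those two limits roughly correspond to a connector's
maxThreads (request processors) and acceptCount (the listen backlog hint
passed to the OS). A sketch of a server.xml connector with the numbers
above (port and timeout values are just placeholders):

```xml
<!-- Hypothetical connector: 100 processors + 100 queued in the backlog -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="100"
           acceptCount="100"
           connectionTimeout="20000" />
```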

Let's talk about 1000 simultaneous clients pounding on this service:
the 200 lucky winners essentially get connections, all others get
weirdness but will likely reconnect a short time later.

If you just use the IP stack's backlog, then the queue gets processed
by the OS: the Java code is super-simple (just accept() and wait) and
incoming connections are essentially buffered by the TCP/IP stack's
backlog. Basically, your application serves requests as fast as it and
the OS can allow.
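The "super-simple" application side really is just bind-with-a-backlog,
accept, and hand off to a worker. A minimal runnable sketch (port 0 and
the local client are just there to make it self-contained; the class and
variable names are mine, not Tomcat's):

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AcceptLoop {
    public static void main(String[] args) throws IOException {
        // Bind with an explicit backlog of 100; port 0 picks a free port.
        ServerSocket server =
            new ServerSocket(0, 100, InetAddress.getLoopbackAddress());
        ExecutorService workers = Executors.newFixedThreadPool(100);

        // A local client so the accept() below returns immediately.
        Socket client =
            new Socket(InetAddress.getLoopbackAddress(), server.getLocalPort());

        // The whole "application side" of the design: accept and hand off.
        // While every worker is busy, unaccepted connections simply queue
        // in the OS backlog -- no unbinding, no churn.
        Socket conn = server.accept();
        workers.submit(() -> {
            try { conn.close(); } catch (IOException ignored) { }
        });
        System.out.println("accepted");

        client.close();
        workers.shutdown();
        server.close();
    }
}
```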

Instead, if you unbind and re-bind the port, you not only run the risk
of losing your port (which I'll admit is fairly far-fetched, but it
certainly could happen), but you are also potentially dropping 100
connections immediately from the backlog (what kind of experience do
*those* clients get?), processing 1 connection through completion
(there are 99 others still running), re-binding, accepting a single
connection into the application plus 100 others into the backlog, then
choking again and dropping 100 connections, then processing another
single connection. That's a huge waste of time, unbinding and
re-binding the port and killing the backlog over and over again... and
all for 1-connection-at-a-time pumping. Insanity.

You want to add all this extra complexity to the code, and provide
(IMO) shitty handling of your incoming connections, just so you can
say "well, you're getting 'connection refused' instead of hanging...
isn't that better?" I assert that it is *not* better. Clients can set
TCP handshake timeouts and survive. Your server will perform much
better without all this foolishness.
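On the client side, bounding the handshake is a one-liner: connect via
the two-argument Socket.connect() instead of the connecting constructor.
A sketch (the local ServerSocket just stands in for the loaded service,
and the 2-second timeout is an arbitrary choice):

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class ConnectTimeout {
    public static void main(String[] args) throws IOException {
        // Stand-in for the busy server; its backlog has room, so the
        // OS will complete the handshake even before any accept().
        ServerSocket server =
            new ServerSocket(0, 100, InetAddress.getLoopbackAddress());
        Socket socket = new Socket();
        try {
            // Bound the TCP handshake: give up after 2 seconds
            // instead of hanging indefinitely.
            socket.connect(new InetSocketAddress(
                    InetAddress.getLoopbackAddress(),
                    server.getLocalPort()), 2000);
            System.out.println("connected");
        } catch (SocketTimeoutException e) {
            // Back off and retry later rather than treating this as fatal.
            System.out.println("timed out");
        } finally {
            socket.close();
            server.close();
        }
    }
}
```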

I have yet to see any performance data but I suspect that throughput
would go down substantially if this idea were to be implemented.

- -chris
