tomcat-users mailing list archives

From Christopher Schultz <>
Subject Re: Monitoring Connections
Date Wed, 21 Oct 2015 18:58:07 GMT

On 10/21/15 2:37 PM, Jamie Jackson wrote:
> On Wed, Oct 21, 2015 at 1:03 PM, Christopher Schultz  wrote:
>> Jamie,
>> Your mostly-default <Connector> will default to a maximum of 200
>> incoming connections with 200 threads to handle them. You are only using
>> 12, so something else must be going on. You have no obvious limits on
>> httpd, so you are probably using the default there as well
>> (coincidentally, also in the 200-connection range).
>> That's a high connection timeout: 93 seconds (why 93?). Note that the
>> connectionTimeout sets the amount of time Tomcat will wait for a client
>> to send the request line (the "GET /foo HTTP/1.1"), not the amount of
>> time the request is allowed to run -- like for an upload, etc. I usually
>> lower this setting from the default of 60 seconds to more like 5 or 10
>> seconds. Clients shouldn't be waiting a long time between making a
>> connection and sending a request.
>> This timeout also applies to subsequent requests on a keep-alive
>> connection. So if the browser opens a connection and sends 1, 2, 3
>> requests, Tomcat will hold that thread+connection open for 93 seconds
>> after the last request (assuming the client doesn't terminate the
>> connection, which it might NOT) before allowing other clients to be
>> serviced by that thread. This is a BIO-Connector-only behavior. The
>> NIO/NIO2 and APR connectors don't hold-up the request thread waiting for
>> a follow-up keep-alive request from a client.
> Thanks for the info. It seems as if connectionTimeout is almost universally
> misunderstood to mean something like "request timeout," (which is why it
> had been high--to accommodate things like long responses and file uploads).
> It seems possible that we could be using up too many threads for too long
> because of the effect of this long timeout on keep-alives.

While that's true, you should still have something like 185 threads "in
reserve", so the server shouldn't grind to a halt and lock everyone else
out. Other components in the mix (e.g. a load-balancer, a QoS component,
etc.) could prevent more connections from getting through; and if you
are connecting from a single web browser with a 4-connections-per-host
limit, you'll obviously only be able to upload 4 files at a time.

But you didn't say anything about that kind of thing, so I assume it's
not the issue.

> The only time I can think of that a client would be taking any kind of time
> between connection and sending the request URI line is if someone is
> manually interacting (say, via telnet). I'm going to follow your lead and
> reduce this.

I wouldn't reduce it past the default of 60 seconds (60000ms) unless you
are observing client-starvation.
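
To make that concrete, here is a minimal <Connector> sketch along the
lines discussed above. The port and the exact timeout value are
illustrative, not taken from your configuration; note that
connectionTimeout is in milliseconds:

```xml
<!-- Hypothetical connector using the NIO protocol, so an idle
     keep-alive connection does not pin a request thread the way the
     BIO connector does. connectionTimeout is lowered toward the
     5-10 second range suggested above (value in milliseconds). -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="200"
           connectionTimeout="10000"
           redirectPort="8443" />
```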

> I doubt that this is the *sole* culprit, but it *is* something for me to
> tweak.

I would read the whole HTTP-Connector configuration reference --
especially the "timeout" related items -- and make sure you understand
them all before setting any of them. The defaults are reasonable, but
every environment has its own special set of requirements.
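
As a hedged illustration of the timeout-related attributes worth
looking up in that reference (check the defaults against your own
Tomcat version's documentation before copying anything):

```xml
<!-- Illustrative only. Roughly:
     connectionTimeout:       how long to wait for the request line
     keepAliveTimeout:        how long to wait for the next request on
                              a keep-alive connection (defaults to
                              connectionTimeout when unset)
     disableUploadTimeout /
     connectionUploadTimeout: control a separate, longer timeout used
                              while a request body is being read,
                              which is what actually covers slow
                              uploads. All values in milliseconds. -->
<Connector port="8080"
           protocol="HTTP/1.1"
           connectionTimeout="10000"
           keepAliveTimeout="10000"
           disableUploadTimeout="false"
           connectionUploadTimeout="300000" />
```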

I don't think the timeouts are the issue. What else can you tell us
about the behavior of the server when it "crashes"? I don't think you
have really described the actual problem, yet.

