tomcat-users mailing list archives

From Christopher Schultz <>
Subject Re: AW: AW: afs3-rmtsys: connections keept open
Date Thu, 17 Jan 2013 17:38:03 GMT

On 1/17/13 1:49 AM, David Kumar wrote:
> I just checked /var/logs/apache2/error.logs. And found following
> errors:
> [Wed Jan 16 15:14:46 2013] [error] server is within MinSpareThreads
> of MaxClients, consider raising the MaxClients setting [Wed Jan 16
> 15:14:56 2013] [error] server reached MaxClients setting, consider
> raising the MaxClients setting

So you are maxing out your connections: you are experiencing enough
load that your configuration cannot handle any more simultaneous
connections. Requests are being queued by the TCP/IP stack, and some
may be rejected outright depending upon the socket's backlog length.

The first question to ask yourself is whether or not your hardware can
take more than you have it configured to accept. For instance, if your
load average, memory usage, and response time are all reasonable, then
you could probably afford to raise your MaxClients setting in httpd.

Note that the above has almost nothing to do with Tomcat: it only has
to do with Apache httpd.
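Since your error log mentions MinSpareThreads, you are on a threaded
MPM (worker or event). The relevant knobs look something like the
following sketch; the numbers are purely illustrative, not
recommendations for your hardware:

```apache
# httpd.conf sketch — worker MPM (values are illustrative only)
<IfModule mpm_worker_module>
    ServerLimit         16
    ThreadsPerChild     25
    MaxClients         400   # = ServerLimit * ThreadsPerChild
    MinSpareThreads     25
    MaxSpareThreads     75
</IfModule>
```

Note that under a threaded MPM, MaxClients must not exceed
ServerLimit * ThreadsPerChild, or httpd will silently cap it.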

> Yesterday my problem occurred about the same time.

So, the problem is that Tomcat cannot handle your peak load due to a
file handle limitation. IIRC, your current file handle limit for the
Tomcat process is 4096.

> I'm checking every five minutes how many open files there are:
> count open files started: 01-16-2013_15:10: Count: 775
> count open files started: 01-16-2013_15:15: Count: 1092

Okay. lsof will help you determine how many of those are "real" files
versus sockets. Limiting socket usage might be somewhat easier
depending upon what your application actually does.
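As a quick sketch (assuming Linux; the pgrep pattern in the comment is
an assumption about how your Tomcat was launched):

```shell
# Rough file-handle census for a process. Here we inspect the current
# shell ($$) as a stand-in; substitute your Tomcat JVM's PID, e.g.
#   PID=$(pgrep -f org.apache.catalina.startup.Bootstrap)
PID=$$

# Total open descriptors (stdin/stdout/stderr count, so always >= 3):
total=$(ls /proc/$PID/fd | wc -l)

# How many of those are sockets rather than "real" files:
sockets=$(ls -l /proc/$PID/fd | grep -c socket)

echo "total=$total sockets=$sockets"
```

`lsof -p $PID` gives the same breakdown with more detail (file names,
socket endpoints), and `lsof -p $PID -a -i tcp` lists just the TCP
sockets.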

> But maybe the afs3 connection causing the Apache error?

afs3 is a red herring: you are using port 7009 for AJP communication
between httpd and Tomcat and it's being reported as afs3. This has
nothing to do with afs3 unless you know for a fact that your web
application uses that protocol for something. I don't see any evidence
that afs3 is related to your environment in the slightest. I do see
every indication that you are using port 7009 yourself for AJP so
let's assume that's the truth.

Let's recap what your webapp(s) actually do to see if we can't figure
out where all your file handles are being used. I'll assume that each
Tomcat is configured (reasonably) identically, other than port numbers
and such. I'll also assume that you are running the same webapp using
the same (virtually) identical configuration and that nothing
pathological is happening (like one process totally going crazy and
making thousands of socket connections due to an application bug).

First, all processes need access to stdin, stdout, stderr: that's 3
file handles. Plus all shared libraries required to get the process
and JVM started. Plus everything Java needs. Depending on the OS,
that's about 30 or so to begin with. Then, Tomcat uses /dev/random (or
/dev/urandom) plus it needs to load all of its own libraries from JAR
files. There are about 25 of them, and they generally stay open. So,
we're up to about 55 file handles. Don't worry: we won't be counting
these things one-at-a-time for long. Next, Tomcat has two <Connector>s
defined with default connection sizes. At peak load, they will both be
maxed-out at 200 connections each, for a total of 402 file handles
((1 bind handle + 200 connection handles) * 2 connectors). So, we're
up to 457.
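In server.xml terms, that arithmetic comes from something like the
sketch below: maxThreads defaults to 200 on both connectors when you
don't set it. The HTTP port is a guess; 7009 is the AJP port you are
already using.

```xml
<!-- server.xml sketch: two connectors, each defaulting to
     maxThreads="200", i.e. up to 200 concurrent connections apiece
     plus one listening socket each. The HTTP port is illustrative. -->
<Connector port="8080" protocol="HTTP/1.1" maxThreads="200" />
<Connector port="7009" protocol="AJP/1.3"  maxThreads="200" />
```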

Now, onto your web application. You have to count the number of JAR
files that your web application provides: each one of those likely
consumes another file handle that will stay open. Does your webapp use
a database? If so, do you use a connection pool? How big is the
connection pool? Do you have any leaks? If you use a connection pool
and have no leaks, then you can add 'maxActive' file handles to our
running count. If you don't use a connection pool, then you can add
400 file handles to your count, because any incoming request on either
of those two connectors could result in a database connection. (I
highly recommend using a connection pool if you aren't already).
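A pooled DataSource declared in the webapp's context.xml looks roughly
like this, using Tomcat's built-in commons-dbcp pooling. The resource
name, driver, URL, credentials, and maxActive="50" are all
placeholders:

```xml
<!-- context.xml sketch: Tomcat-managed commons-dbcp pool.
     maxActive bounds how many DB connections (= file handles)
     this pool can ever hold open. All values are placeholders. -->
<Resource name="jdbc/MyDB" auth="Container"
          type="javax.sql.DataSource"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://localhost:3306/mydb"
          username="app" password="secret"
          maxActive="50" maxIdle="10" maxWait="10000" />
```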

Next, you said this:

> Both of the tomcats are "synchronising" themselves. They send some
> serialized objects via http to each other.

So the webapps make requests to each other? How? Is there a limit to
the number of connections directly from one Tomcat to another? If not,
then you can add another 400 file handles because any incoming
connection could trigger an HTTP connection to the other Tomcat. (What
happens if an incoming client connection causes a connection to the
other Tomcat... will that Tomcat ever call-back to the first one and
set-up a communication storm?).

> And both of them getting some file from SMB shares.

How many files? Every file you open consumes a file handle. If you
close the file, you can reduce your fd footprint, but if you keep lots
of files open...

If you have a dbcp with size=50, you limit your cross-Tomcat
connections to, say, another 50, and your webapp uses 50 JAR files,
then you are looking at 600 or so file handles required to run your
webapp under peak load, not including files that must be opened to
satisfy a particular request.

So the question is: where are all your fds going? Use lsof to
determine what they are being used for.

Some suggestions:

1. Consider the number of connections you actually need to be able to
handle: for both connectors. Maybe you don't need 200 possible
connections for your HTTP connector.

2. Make sure the MaxClients setting in httpd makes sense against what
you've got in Tomcat's AJP connector: you want to make sure that you
have enough connections available from httpd to Tomcat that you aren't
making users wait. If you're using the prefork MPM, that means
MaxClients should match your <Connector>'s maxThreads setting (or,
better yet, use an <Executor>).

3. Use an <Executor>. Right now, you might allocate up to 400 threads
to handle connections from both AJP and HTTP. Maybe you don't need
that. You can share request-processing threads by using an <Executor>
and have both connectors share the same pool.
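A shared pool looks like this in server.xml; the pool name, sizes, and
HTTP port are illustrative:

```xml
<!-- One thread pool shared by both connectors: at most 200 request-
     processing threads total, instead of 200 + 200. -->
<Executor name="sharedPool" namePrefix="catalina-exec-"
          maxThreads="200" minSpareThreads="10" />
<Connector port="8080" protocol="HTTP/1.1" executor="sharedPool" />
<Connector port="7009" protocol="AJP/1.3"  executor="sharedPool" />
```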

4. Use a DBCP. Just in case you aren't.

5. Check to see how you are communicating Tomcat-to-Tomcat: you may
have a problem where too many connections are being opened.

6. Check to make sure you don't have any resource leaks: JDBC
connections that aren't closed, files not being closed, etc. etc.
Check to make sure you are closing files that don't need to be open
after they are read.

> But I can't imagine that might be the problem? I'm wondering why
> the tcp connections with state "CLOSE_WAIT" don't get closed.
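CLOSE_WAIT means the remote end has closed its side but the local
application has not yet called close() on the socket, so a steadily
growing CLOSE_WAIT count is itself a symptom of a descriptor leak. A
quick way to watch it on Linux (in /proc/net/tcp, state code 08 is
CLOSE_WAIT):

```shell
# Count TCP connections stuck in CLOSE_WAIT. In /proc/net/tcp the
# 4th column is the connection state in hex; 08 means CLOSE_WAIT.
count=$(awk '$4 == "08"' /proc/net/tcp | wc -l)
echo "CLOSE_WAIT sockets: $count"
```

`netstat -tn | grep CLOSE_WAIT` shows the same thing with the remote
endpoints, which helps identify which peer is involved.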

-chris