tomcat-users mailing list archives

From Christopher Schultz <>
Subject Re: Tomcat 6.0.35-SocketException: Too many open files issue with
Date Mon, 23 Jan 2012 15:51:44 GMT


On 1/22/12 6:18 PM, gnath wrote:
> We have 2 connectors (one for http and another for https) using
> the tomcatThreadPool. I have connectionTimeout="20000" on the
> http connector. However I was told that our https connector might
> not be used by the app, as our load balancer is handling all the
> https traffic and just sending it to the http connector.

You might want to disable that HTTPS connector, but it's probably not
hurting you at all in this case -- just a bit of wasted resources.
Since you are sharing a single thread pool, it has no negative impact
on the number of threads or open files you have to deal with here.
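For reference, a shared pool with the HTTPS connector taken out of
service might look like this in server.xml. This is only a sketch
against Tomcat 6 defaults -- the executor name matches what you
mentioned, but the ports and thread counts here are illustrative, not
taken from your config:

```xml
<!-- One shared pool; every connector that names it draws threads
     from the same place, so it caps threads (and their sockets). -->
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
          maxThreads="200" minSpareThreads="4"/>

<Connector port="8080" protocol="HTTP/1.1"
           executor="tomcatThreadPool"
           connectionTimeout="20000"/>

<!-- If the load balancer terminates HTTPS, this connector can simply
     be commented out rather than left running unused:
<Connector port="8443" protocol="HTTP/1.1"
           executor="tomcatThreadPool"
           SSLEnabled="true" scheme="https" secure="true"/>
-->
```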

> the ulimit settings were increased from the default 1024 to 4096 by
> our admin. Not sure how he did that, but I see the count as 4096
> when I do ulimit -a.

Well, if your admin says it's right, I suppose it's right.

> for ulimit -n I see it's 'unlimited'.

That's good.
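
One thing worth double-checking: ulimit reports the limits of the
shell you run it in, not of the running JVM, and changing a ulimit
after launch never affects an already-running process. A quick sketch
(run the /proc check against the Tomcat PID rather than 'self'):

```shell
# Limits of *this shell* -- what a child process launched from it
# would inherit:
ulimit -n    # max open files, soft limit
ulimit -Hn   # max open files, hard limit

# Limits of a *running* process -- the ones that actually matter.
# Substitute the Tomcat PID for 'self':
grep 'open files' /proc/self/limits
```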

> for cat /proc/PID/limits, I get the following response:
> Limit                     Soft Limit   Hard Limit   Units
> Max cpu time              unlimited    unlimited    seconds
> Max file size             unlimited    unlimited    bytes
> Max data size             unlimited    unlimited    bytes
> Max stack size            10485760     unlimited    bytes
> Max core file size        0            unlimited    bytes
> Max resident set          unlimited    unlimited    bytes
> Max processes             unlimited    unlimited    processes
> Max open files            4096         4096         files
> Max locked memory         32768        32768        bytes
> Max address space         unlimited    unlimited    bytes
> Max file locks            unlimited    unlimited    locks
> Max pending signals       202752       202752       signals
> Max msgqueue size         819200       819200       bytes
> Max nice priority         0            0
> Max realtime priority     0            0

Those all look good to me.

> This morning Tomcat hung again, but this time it didn't say 'too
> many open files' in the logs; I only see this below in catalina.out:
> org.apache.tomcat.util.http.Parameters processParameters
> INFO: Invalid chunk starting at byte [0] and ending at byte [0]
> with a value of [null] ignored
> org.apache.tomcat.util.http.Parameters processParameters
> INFO: Invalid chunk starting at byte [0] and ending at byte [0]
> with a value of [null] ignored


> When it hung (the java process was still up), I ran a few commands
> like lsof by PID and a couple of others.

Next time, take a thread dump as well. The fact that Tomcat hung
without an OS-level error (like Too Many Open Files) is probably not
good. If this happens again -- an apparent hang with no stack traces
in the logs -- take a thread dump and post it back here under a
different subject.
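
For what it's worth, two standard ways to take one. The pgrep pattern
assumes a stock catalina.sh launch -- substitute your own PID lookup
if yours differs:

```shell
# Find the Tomcat JVM (pattern assumes a standard catalina.sh launch):
PID=$(pgrep -f org.apache.catalina.startup.Bootstrap) || true

if [ -n "$PID" ]; then
  # Option 1: SIGQUIT makes the JVM print a thread dump to stdout,
  # which Tomcat's scripts redirect to catalina.out. The process
  # keeps running afterwards.
  kill -3 "$PID"

  # Option 2: jstack (ships with the JDK) writes the dump to a file.
  jstack "$PID" > /tmp/threaddump.txt
fi
```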

> here is what I got:
> lsof -p PID | wc -l    -> 1342
> lsof | wc -l           -> 4520
> lsof -u USER | wc -l   -> 1953

Hmm, I wonder if you are hitting a *user* or even *system* limit of
some kind (though a *NIX system with a hard limit of ~4500 file
descriptors seems entirely unreasonable). I also wonder how many
/processes/ and/or /threads/ you have running at once.
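
To answer that, /proc is usually more trustworthy than lsof, which
can over-count by listing memory-mapped files and per-thread entries.
A sketch ('self' stands in for the Tomcat PID):

```shell
# File descriptors actually held by one process: /proc/PID/fd has
# exactly one entry per open descriptor.
ls /proc/self/fd | wc -l

# Thread count of the same process:
grep Threads /proc/self/status

# System-wide handle usage (allocated, free, max) and kernel maximum:
cat /proc/sys/fs/file-nr /proc/sys/fs/file-max
```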

> After I kill the java process, the lsof count for the PID obviously
> returns to zero

Of course.

> Is there any chance that Tomcat is ignoring the ulimit?

Those limits are not self-imposed: the OS imposes them. Tomcat
doesn't even know its own ulimit (of any kind), so it will simply
consume whatever resources you have configured it to use, and if it
hits a limit, the JVM will experience some kind of OS-related error.

> , some people on web were saying something about setting this in

Setting what? ulimit? I'd do it in because that's a more
appropriate place for that kind of thing. I'm also interested in what
the Internet has to say about what setting(s) to use.
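
For the record, the usual persistent mechanism on Linux is
pam_limits; the user name and the 8192 value below are assumptions
for illustration only:

```shell
# Persistent approach: /etc/security/limits.conf, read by pam_limits
# when the user logs in (shown as comments, not applied here):
#   tomcat  soft  nofile  8192
#   tomcat  hard  nofile  8192
#
# Per-script approach: raise the soft limit in the Tomcat startup
# script, before the JVM is launched:
#   ulimit -n 8192
#
# Either way, verify from the shell that launches Tomcat:
ulimit -n
```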

> Please help with my ongoing issue.. it's getting very hard to
> monitor the logs every minute and restart whenever it hangs with
> these kinds of issues. I very much appreciate your help in this.

Did this just start happening recently? Perhaps with an upgrade of
some component?

If you think this might actually be related to the number of file
handles being used by your thread pool, you might want to reduce the
maximum number of threads for that thread pool: a slightly less
responsive site is better than one that goes down all the time because
of hard resource limits.
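
Concretely, that's a one-attribute change on the shared pool; 100
here is illustrative, not a recommendation tuned to your load:

```xml
<!-- Each worker thread can hold an accepted socket (one descriptor)
     plus whatever files it opens, so the pool size bounds descriptor
     usage from request handling. -->
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
          maxThreads="100" minSpareThreads="4"/>
```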

-chris