tomcat-users mailing list archives

From gnath <gautam_exquis...@yahoo.com>
Subject Re: Tomcat 6.0.35-SocketException: Too many open files issue with
Date Sun, 22 Jan 2012 23:18:12 GMT
Thanks, Chris, for looking into this.

Here are the answers to the questions you asked.

We have two connectors (one for HTTP and another for HTTPS), both using the
tomcatThreadPool executor. I have connectionTimeout="20000" on the HTTP
connector. However, I was told that our HTTPS connector might not actually be
used, since our load balancer handles all the HTTPS traffic and just forwards
it to the HTTP connector.
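For reference, here is a rough sketch of how the connectors are wired to the
shared executor (the ports and the SSL attributes are from memory, so treat
the exact values as assumptions):

    <!-- server.xml (sketch; ports and SSL attributes assumed) -->
    <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
              maxThreads="500" minSpareThreads="50"/>

    <!-- HTTP connector: the one the load balancer actually sends traffic to -->
    <Connector port="8080" protocol="HTTP/1.1"
               executor="tomcatThreadPool"
               connectionTimeout="20000"/>

    <!-- HTTPS connector: probably idle, since the load balancer terminates SSL -->
    <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
               executor="tomcatThreadPool"
               scheme="https" secure="true"/>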

The ulimit settings were increased from the default 1024 to 4096 by our
admin. I'm not sure how he did that, but I see the count as 4096 when I run
ulimit -a.

For ulimit -u (max user processes) I see 'unlimited'.
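I assume he did something like this in /etc/security/limits.conf (my guess,
I haven't confirmed with him; 'tomcat' stands in for whatever user the JVM
actually runs as):

    # /etc/security/limits.conf -- raise the open-file limit for the tomcat user
    # (takes effect on the next login/session, not for already-running processes)
    tomcat   soft   nofile   4096
    tomcat   hard   nofile   4096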

For cat /proc/PID/limits, I get the following output:

Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            10485760             unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             unlimited            unlimited            processes
Max open files            4096                 4096                 files
Max locked memory         32768                32768                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       202752               202752               signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0



This morning Tomcat hung again, but this time it didn't say 'too many open
files' in the logs; all I see in catalina.out is this, repeated:

org.apache.tomcat.util.http.Parameters processParameters
INFO: Invalid chunk starting at byte [0] and ending at byte [0] with a value of [null] ignored
org.apache.tomcat.util.http.Parameters processParameters
INFO: Invalid chunk starting at byte [0] and ending at byte [0] with a value of [null] ignored

When it hung (the java process was still up), I ran a few commands, lsof by
PID and a couple of others. Here is what I got:

lsof -p PID | wc -l
1342

lsof | wc -l
4520

lsof -u USER | wc -l
1953

After I killed the java process, the lsof count for that PID obviously returned to zero.
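Rather than watching the logs by hand, I'm thinking of leaving a small
watcher running to sample the descriptor count over time (a rough sketch;
PID, the interval, and the log path are placeholders):

    # every 60s, log the open-fd count and a breakdown by descriptor type
    while true; do
        date                                    >> /tmp/fd-watch.log
        ls /proc/PID/fd | wc -l                 >> /tmp/fd-watch.log
        lsof -p PID | awk '{print $5}' | sort \
            | uniq -c | sort -rn | head         >> /tmp/fd-watch.log
        sleep 60
    done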


Is there any chance that Tomcat is ignoring the ulimit? Some people on the
web were saying something about setting this in catalina.sh.
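If I understood those posts correctly, they mean something like this (my
reading of them, not something I have tried yet):

    # raise the fd limit in the shell that launches Tomcat, e.g. near the top
    # of catalina.sh, or in bin/setenv.sh (sourced by catalina.sh if present)
    ulimit -n 4096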

Please help with my ongoing issue; it's getting very hard to monitor the
logs every minute and restart whenever it hangs with these kinds of issues.
I very much appreciate your help.

Thanks
-G



________________________________
 From: Christopher Schultz <chris@christopherschultz.net>
To: Tomcat Users List <users@tomcat.apache.org> 
Sent: Sunday, January 22, 2012 11:20 AM
Subject: Re: Tomcat 6.0.35-SocketException: Too many open files  issue with
 

G,

On 1/22/12 3:01 AM, gnath wrote:
> We have been seeing "SocketException: Too many open files" in our
> production environment (Linux OS running Tomcat 6.0.35 with Sun's
> JDK 1.6.30) every day, and it requires a restart of Tomcat. When
> this happened for the first time, we searched online and found
> people suggesting to increase the file descriptor limit, which we
> increased to 4096. But the problem still persists. We have the
> Orion App Server also running on the same machine, but usually
> during the day when we check the open file descriptors with
> ls -l /proc/PID/fd, the count is always less than 1000 combined
> for both Orion and Tomcat.
> 
> Here is the exception we see pouring into the logs once it starts;
> it requires us to kill the java process and restart Tomcat. Our
> Tomcat configuration has maxThreads=500 with minSpareThreads=50
> in server.xml.

How many connectors do you have? If you have more than one connector
with 500 threads each, then you may have more threads than you are
expecting.
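A quick way to check is something like this (a sketch; replace PID with the
Tomcat process id, and adjust the grep pattern to match your executor's
namePrefix or your connector's thread names):

    # total threads in the JVM
    ps -o nlwp= -p PID

    # count request-processing threads by name prefix
    jstack PID | grep -c 'catalina-exec-'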

> SEVERE: Socket accept failed
> java.net.SocketException: Too many open files
>     at java.net.PlainSocketImpl.socketAccept(Native Method)
>     at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:408)
>     at java.net.ServerSocket.implAccept(ServerSocket.java:462)
>     at java.net.ServerSocket.accept(ServerSocket.java:430)
>     at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:61)
>     at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:352)
>     at java.lang.Thread.run(Thread.java:662)
> 
> ulimit -a gives the following for the user Tomcat runs as:
> 
> open files                      (-n) 4096

How did you set the ulimit for this user? Did you do it in a login
script or something, or just at the command-line at some point?

How about (-u) max user processes or threads-per-process or anything
like that?

Sometimes "Too many open files" is not entirely accurate.

What does 'cat /proc/PID/limits' show you?
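It would also be worth checking how close the whole machine is to its global
file-handle limit, not just the one process (the three numbers reported are
allocated handles, free handles, and the system-wide maximum):

    cat /proc/sys/fs/file-nr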

-chris
