tomcat-users mailing list archives

From Pid <...@pidster.com>
Subject Re: Tomcat 6.0.35-SocketException: Too many open files issue with
Date Thu, 26 Jan 2012 08:38:51 GMT
On 26/01/2012 04:53, gnath wrote:
> Hi Chris, 
> 
> Thanks a lot for looking into this and answering all my questions. Sorry, I
> could not get a chance to reply in time. As you suggested, I started
> collecting thread dumps when it happened again, and we saw some kind of DBCP
> connection pool issue leading to the 'Too many open files' problem. So we
> decided to replace Commons DBCP with tomcat-jdbc.jar (with the same
> configuration properties). After this change things seemed fine for a few
> hours, but then we started seeing in the logs that the connection pool could
> not hand out any connections; all of them appeared to be busy. So we went
> ahead and added the configuration property 'removeAbandoned=true' to our
> Datasource configuration.
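
(For context: a tomcat-jdbc DataSource with abandoned-connection cleanup, as
described above, is typically declared along these lines. Every attribute
value here is illustrative, not the poster's actual configuration;
logAbandoned is an optional extra that logs the code path that leaked each
connection.)

```xml
<!-- Illustrative tomcat-jdbc pool with abandoned-connection cleanup.
     All names and values are examples, not the poster's configuration. -->
<Resource name="jdbc/ExampleDB" auth="Container"
          type="javax.sql.DataSource"
          factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
          maxActive="100" maxIdle="10" maxWait="10000"
          removeAbandoned="true"
          removeAbandonedTimeout="60"
          logAbandoned="true"
          username="dbuser" password="dbpass"
          driverClassName="com.mysql.jdbc.Driver"
          url="jdbc:mysql://localhost:3306/exampledb"/>
```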
> 
> 
> We are still watching the performance and the server behavior after these changes. 
> Will keep you posted on how things turn out or if I see any further issues. 
> 
> 
> thank you once again, I really appreciate your help.
> 
> Thanks

This sounds increasingly like your application isn't returning
connections to the pool properly.  Switching pool implementation won't
help if this is the case.

You should carefully examine the code where the database is used to
ensure that DB resources are returned to the pool in a finally block,
after use.
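As a minimal sketch of that pattern: the stub PooledConnection below stands in
for a real java.sql.Connection so the example runs standalone; the class and
method names are illustrative only. The point is that close() sits in a
finally block, so the connection goes back to the pool even when the work
throws.

```java
// Sketch of the "return resources in a finally block" pattern.
// PooledConnection is a stand-in for java.sql.Connection; with real JDBC
// you would close the ResultSet, Statement and Connection the same way.
public class FinallyPattern {

    static class PooledConnection {
        static int openCount = 0;            // connections currently checked out
        PooledConnection()  { openCount++; }
        void query()        { /* work that may throw */ }
        void close()        { openCount--; } // returns the connection to the pool
    }

    static void useConnection() {
        PooledConnection conn = new PooledConnection();
        try {
            conn.query();
        } finally {
            conn.close();                    // runs on both success and failure
        }
    }

    static void useConnectionThatFails() {
        PooledConnection conn = new PooledConnection();
        try {
            throw new RuntimeException("query failed");
        } finally {
            conn.close();                    // still returned despite the exception
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) useConnection();
        try { useConnectionThatFails(); } catch (RuntimeException expected) { }
        System.out.println("connections still open: " + PooledConnection.openCount);
        // prints: connections still open: 0
    }
}
```

Without the finally block, any exception between checkout and close() leaks
one connection per request, which is exactly the slow climb toward 'Too many
open files' described above.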

Chris's question regarding 'what has changed' is still relevant.


p

> ________________________________
>  From: Christopher Schultz <chris@christopherschultz.net>
> To: Tomcat Users List <users@tomcat.apache.org> 
> Sent: Monday, January 23, 2012 7:51 AM
> Subject: Re: Tomcat 6.0.35-SocketException: Too many open files  issue with
>  
> G,
> 
> On 1/22/12 6:18 PM, gnath wrote:
>> We have 2 connectors (one for http and another for https) using
>> the tomcatThreadPool. I have connectionTimeout="20000" for the
>> http connector. However, I was told that our https connector might
>> not be used by the app, as our load balancer handles all the
>> https traffic and just sends it to the http connector.
> 
> You might want to disable that HTTPS connector, but it's probably not
> hurting you at all in this case -- just a bit of wasted resources. If
> you are sharing a thread pool then there is no negative impact on the
> number of threads and/or open files that you have to deal with, here.
> 
>> The ulimit settings were increased from the default 1024 to 4096 by
>> our admin. Not sure how he did that, but I see the count as 4096
>> when I do ulimit -a.
> 
> Well, if your admin says it's right, I suppose it's right.
> 
>> For ulimit -n I see it's 'unlimited'.
> 
> That's good.
> 
>> for cat /proc/PID/limits, i get the following response:
> 
>> Limit                     Soft Limit           Hard Limit           Units
>> Max cpu time              unlimited            unlimited            seconds
>> Max file size             unlimited            unlimited            bytes
>> Max data size             unlimited            unlimited            bytes
>> Max stack size            10485760             unlimited            bytes
>> Max core file size        0                    unlimited            bytes
>> Max resident set          unlimited            unlimited            bytes
>> Max processes             unlimited            unlimited            processes
>> Max open files            4096                 4096                 files
>> Max locked memory         32768                32768                bytes
>> Max address space         unlimited            unlimited            bytes
>> Max file locks            unlimited            unlimited            locks
>> Max pending signals       202752               202752               signals
>> Max msgqueue size         819200               819200               bytes
>> Max nice priority         0                    0
>> Max realtime priority     0                    0
> 
> Those all look good to me.
> 
>> This morning Tomcat hung again, but this time it didn't say 'too many
>> open files' in the logs; I only see the following in catalina.out:
> 
>> org.apache.tomcat.util.http.Parameters processParameters INFO:
>> Invalid chunk starting at byte [0] and ending at byte [0] with a
>> value of [null] ignored
>> org.apache.tomcat.util.http.Parameters processParameters INFO:
>> Invalid chunk starting at byte [0] and ending at byte [0] with a
>> value of [null] ignored
> 
> Hmm...
> 
>> When it hung(java process is still up), i ran few commands like
>> lsof by PID and couple others.
> 
> Next time, take a thread dump as well. The fact that Tomcat hung up
> without an OS problem (like Too Many Open Files) is probably not good.
> If this happens again with an apparent hang with no stack traces in
> the logs, take a thread dump and post it back here under a different
> subject.
> 
>> here is what i got:
> 
>> lsof -p PID | wc -l
>> 1342
> 
>> lsof | wc -l
>> 4520
> 
>> lsof -u USER | wc -l
>> 1953
> 
> Hmm I wonder if you are hitting a *user* or even *system* limit of
> some kind (though a *NIX system with a hard limit of ~4500 file
> descriptors seems entirely unreasonable). I also wonder how many
> /processes/ and/or /threads/ you have running at once.
> 
>> After i kill java process the lsof for pid returned obviously to
>> zero
> 
> Of course.
> 
>> Is there any chance that Tomcat is ignoring the ulimit?
> 
> Those limits are not self-imposed: the OS imposes them. Tomcat
> doesn't even know its own ulimit (of any kind), so it will simply
> consume whatever resources you have configured it to use, and if it
> hits a limit, the JVM will experience some kind of OS-related error.
> 
>> , some people on web were saying something about setting this in
>> catalina.sh.
> 
> Setting what? ulimit? I'd do it in setenv.sh because that's a more
> appropriate place for that kind of thing. I'm also interested in what
> the Internet has to say about what setting(s) to use.
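
(For the record, the setenv.sh approach usually amounts to no more than the
following; 8192 is purely an example value, and the soft limit cannot be
raised above the hard limit by a non-root user.)

```shell
# $CATALINA_BASE/bin/setenv.sh -- sourced by catalina.sh at startup,
# so the limit is in place before the JVM process is created.
# 8192 is an illustrative value, not a recommendation.
ulimit -n 8192
```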
> 
>> Please help with my ongoing issue. It's getting very hard to
>> monitor the logs every minute and restart whenever it hangs with
>> these kinds of issues. I very much appreciate your help with this.
> 
> Did this just start happening recently? Perhaps with an upgrade of
> some component?
> 
> If you think this might actually be related to the number of file
> handles being used by your thread pool, you might want to reduce the
> maximum number of threads for that thread pool: a slightly less
> responsive site is better than one that goes down all the time because
> of hard resource limits.
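> 
(In server.xml terms, that cap is the maxThreads attribute on the shared
executor. The snippet below is illustrative; the executor name and
connectionTimeout value come from earlier in the thread, the remaining
figures are examples only.)

```xml
<!-- Shared executor with a reduced thread cap; 150 is an example value. -->
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
          maxThreads="150" minSpareThreads="4"/>

<Connector port="8080" protocol="HTTP/1.1"
           executor="tomcatThreadPool"
           connectionTimeout="20000"
           redirectPort="8443"/>
```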
> 
> -chris
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: users-help@tomcat.apache.org
