tomcat-users mailing list archives

From Gerard van Enk <>
Subject Re: Tomcat 4.0.1 Performance tuning problems
Date Fri, 08 Feb 2002 20:56:37 GMT
Remy Maucherat wrote:
>>We're having some trouble with our Tomcat 4.0.1 installation. I'm not sure
>>if it's a configuration problem or something else.
>>First the configuration:
>>Sun-Fire-280R/Solaris 8
>>1024MB Memory
>>Sun JDK 1.3.1_01 with 512MB for the JVM
>>and the following configuration for the proxy-connector:
>><Connector className="org.apache.catalina.connector.http.HttpConnector"
>>                port="8081" minProcessors="100" maxProcessors="150"
>>                enableLookups="false"
>>                acceptCount="20" debug="0" connectionTimeout="15000"
>>                proxyPort="80"/>
>>The server is getting about 300000 requests (hits) a day. A while after
>>starting the server we're getting the following message:
>>2002-02-07 21:51:29 HttpConnector[8081] No processor available,
>>rejecting this connection
>>It looks like the connections aren't freed after being used, but I'm
>>not sure how to verify that.
>>Almost every page contains database queries, but we're using heavy
>>caching (in-memory, plus OSCache for the JSP pages). What can I do,
>>increase maxProcessors? But how far can I go? I did some tests, but
>>it looks like more than 150 isn't possible. How much memory do I need
>>per processor?
>>If somebody could give some hints.....
> Bug 5735 is currently the "most wanted" Tomcat bug. Unfortunately, it's
> very hard to debug or even to find which component the bug is in, so if
> you or anyone else can help, that would be great.
> The bug could be either:
> - in the connector code, or in the networking code
> - in the thread pooling code (in which case the other connectors, like JK,
> could be affected since they reuse the code)
> You can try the HTTP/1.0 connector (it's commented out in the default
> configuration), and see if there are still problems. If the problems are
> gone, then it's likely the cause is the first item (although I have no idea
> what it could be at this point).

I tried this (not in a live situation, but with ab), but I don't see any 
difference. I can't reproduce the problem with ab, yet it's still 
happening on the live server. Could it be that 150 processors simply 
isn't enough in our situation?
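
For reference, the HTTP/1.0 connector I enabled for the test looked roughly like this (based on the commented-out entry in the stock 4.0.x server.xml; the attribute values just mirror our HTTP/1.1 connector, so treat them as illustrative):

```xml
<!-- HTTP/1.0 test connector: same settings as our HTTP/1.1 connector,
     but using the http10 connector class -->
<Connector className="org.apache.catalina.connector.http10.HttpConnector"
           port="8081" minProcessors="100" maxProcessors="150"
           enableLookups="false"
           acceptCount="20" debug="0" connectionTimeout="15000"
           proxyPort="80"/>
```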

I would love to help you on this one, but I'm finding it very hard to 
reproduce :(

I'll do some more testing and keep you informed.
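
To check whether connections really aren't being freed, I'm going to watch the socket states on the live box. Something like the pipeline below; the canned sample input is only there to show what it does, on the real server the input would come from `netstat -an` (field layout assumed from Solaris-style output, where the port is appended to the address with a dot):

```shell
# Sketch: summarize TCP states for connector port 8081.
# On the live server:
#   netstat -an | grep '\.8081' | awk '{print $NF}' | sort | uniq -c
# Canned Solaris-style sample lines, so the pipeline is visible here:
sample='10.0.0.1.8081 10.0.0.2.4711 8760 0 24820 0 ESTABLISHED
10.0.0.1.8081 10.0.0.3.4712 8760 0 24820 0 CLOSE_WAIT
10.0.0.1.8081 10.0.0.4.4713 8760 0 24820 0 CLOSE_WAIT'
# Last field is the TCP state; count occurrences of each state
printf '%s\n' "$sample" | awk '{print $NF}' | sort | uniq -c
```

A steadily growing CLOSE_WAIT count would suggest Tomcat isn't closing its side of the sockets, which would fit the "connections aren't freed" theory.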

