activemq-dev mailing list archives

From: "Andrew Huntwork" <...@huntwork.net>
Subject: how many connections?
Date: Fri, 16 Jun 2006 19:40:49 GMT
We're experiencing an OOM with heavy CPU usage at startup on an ActiveMQ 4.0
final release broker.  We're connecting with the ActiveMQ 4.0 final release
client, using Jencks message-driven POJOs and Jencks outbound connection
pooling.  Our config looks like this:
[...]
  <bean id="jmsBootstrapContext" class="org.jencks.factory.BootstrapContextFactoryBean">
    <property name="threadPoolSize" value="10" />
  </bean>
[...]
  <bean id="jmsPoolingSupport" class="org.jencks.factory.SinglePoolFactoryBean">
    <property name="maxSize">
      <value>30</value>
    </property>
    <property name="minSize">
      <value>5</value>
    </property>
    <property name="blockingTimeoutMilliseconds">
      <value>500</value>
    </property>
[...]

We were seeing this OOM consistently this morning after the broker started
failing for unknown reasons and we restarted it.  We're not sending very many
messages (about 6 per second per process), but we're connecting to the broker
from a lot of processes.  We have 22 processes connecting, all running the
configuration above.  That's potentially 880 simultaneous connections to the
broker, but more realistically maybe 330 (rough math below).  Is this likely
to be a problem?  Do other people typically connect this many clients to a
broker?
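
For reference, here's how I'm counting (this assumes every bootstrap-context
thread and every pooled outbound connection ends up as its own connection to
the broker, which may not be exactly how Jencks behaves):

    22 processes x (10 threads + 30 max pool size) = 880 connections worst case
    22 processes x (10 threads +  5 min pool size) = 330 connections more typically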

I ran kill -3 while the broker was out of memory and consuming the CPU.
There are lots of threads that look like this:

"ActiveMQ Transport: tcp:///10.144.71.38:53513" daemon prio=1 tid=0x5e017188 nid=0x2a2b waiting for monitor entry [0x52788000..0x52788780]
        at edu.emory.mathcs.backport.java.util.concurrent.CopyOnWriteArrayList.remove(CopyOnWriteArrayList.java:165)
        - waiting to lock <0x6335cdd0> (a edu.emory.mathcs.backport.java.util.concurrent.CopyOnWriteArrayList)
        at org.apache.activemq.broker.TransportConnector.onStopped(TransportConnector.java:290)
        at org.apache.activemq.broker.TransportConnection.stop(TransportConnection.java:82)
        at org.apache.activemq.util.ServiceSupport.dispose(ServiceSupport.java:39)
        at org.apache.activemq.broker.AbstractConnection.serviceTransportException(AbstractConnection.java:172)
        at org.apache.activemq.broker.TransportConnection$1.onException(TransportConnection.java:68)
        at org.apache.activemq.transport.TransportFilter.onException(TransportFilter.java:94)
        at org.apache.activemq.transport.ResponseCorrelator.onException(ResponseCorrelator.java:120)
        at org.apache.activemq.transport.TransportFilter.onException(TransportFilter.java:94)
        at org.apache.activemq.transport.TransportFilter.onException(TransportFilter.java:94)
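
For what it's worth, here's a tiny standalone sketch of the pattern the dump
suggests to me: lots of transport threads all trying to remove themselves from
one shared CopyOnWriteArrayList at the same time.  Each remove() copies the
entire backing array while holding the list's lock, so N connections tearing
down at once do roughly N^2 copying behind a single monitor.  This is not
ActiveMQ code -- the class and numbers below are made up for illustration, and
it uses java.util.concurrent.CopyOnWriteArrayList, which I'm assuming behaves
like the backport version in the trace:

import java.util.concurrent.CopyOnWriteArrayList;

// Illustration only: N threads concurrently removing their own entry from one
// shared CopyOnWriteArrayList.  Each remove() copies the whole backing array
// while holding the list's lock, so total work grows quadratically with the
// number of entries being removed at once.
public class CowRemoveContention {
    public static void main(String[] args) throws Exception {
        final CopyOnWriteArrayList<Object> connections = new CopyOnWriteArrayList<Object>();
        final int count = 2000;  // pretend each element is one transport connection
        for (int i = 0; i < count; i++) {
            connections.add(new Object());
        }
        final Object[] snapshot = connections.toArray();
        Thread[] workers = new Thread[count];
        long start = System.currentTimeMillis();
        for (int i = 0; i < count; i++) {
            final Object conn = snapshot[i];
            workers[i] = new Thread(new Runnable() {
                public void run() {
                    connections.remove(conn);  // O(list size) copy under the list's single lock
                }
            });
            workers[i].start();
        }
        for (int i = 0; i < count; i++) {
            workers[i].join();
        }
        System.out.println(count + " concurrent removes took "
                + (System.currentTimeMillis() - start) + " ms");
    }
}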

I also ran jmap.  Here's the top of the histogram:

Object Histogram:

Size       Count    Class description
-------------------------------------------------------
923908224  14096    org.apache.activemq.command.DataStructure[]
180613424  68212    byte[]
8106048    253314   edu.emory.mathcs.backport.java.util.concurrent.ConcurrentHashMap$Segment
6926992    80479    char[]
4461856    58151    java.util.HashMap$Entry[]
4063768    253312   edu.emory.mathcs.backport.java.util.concurrent.ConcurrentHashMap$HashEntry[]
4053168    253323   edu.emory.mathcs.backport.java.util.concurrent.locks.ReentrantLock$NonfairSync
3981208    36371    * ConstMethodKlass
3012240    125510   java.util.HashMap$Entry
2622552    36371    * MethodKlass
2325400    58135    java.util.HashMap
2119864    36717    java.lang.Object[]
2076464    48139    * SymbolKlass
1680408    70017    java.lang.String
1596296    2893     * ConstantPoolKlass
1266560    15832    edu.emory.mathcs.backport.java.util.concurrent.ConcurrentHashMap$Segment[]
1195160    2893     * InstanceKlassKlass
1045808    2581     * ConstantPoolCacheKlass
934208     29194    org.apache.activemq.filter.DestinationMapNode
902224     56389    edu.emory.mathcs.backport.java.util.concurrent.atomic.AtomicBool
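
Doing the arithmetic on that first row (assuming a 32-bit JVM, i.e. 4-byte
references): 923,908,224 bytes / 14,096 arrays is about 65,544 bytes per
DataStructure[], so each array holds on the order of 16K references.  Most of
the heap is a fairly small number of very large command arrays rather than a
huge number of small objects.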


Any ideas?  This is becoming a critical issue for us.  We're going to continue
using ActiveMQ 4.0 for our release next week, but if this keeps happening we
may get paged a whole lot and have to wake up in the middle of the night to
restart the broker.

Thanks for any help.
