activemq-users mailing list archives

From Tim Bain <tb...@alumni.duke.edu>
Subject Re: ActiveMQ Recv-Q buffer size
Date Sat, 30 May 2015 21:51:01 GMT
What else is holding that lock?  (You haven't posted this most recent
thread dump and that stack trace doesn't appear in the first one, so it's
hard to know what's going on without more information.)

Looking at your first thread dump, there are a number of NIO threads trying
to acquire a connection from the pool (is the pool sized large enough?), a
number of NIO threads trying to send messages, and a number of threads
waiting on lock object 0x0000000718009e28 (which includes threads trying to
do DefaultJDBCAdapter.doRecover(),
DefaultJDBCAdapter.doDeleteOldMessages(),
DefaultJDBCAdapter.getStoreSequenceId(),
DefaultJDBCAdapter.doRemoveMessage(), and DefaultJDBCAdapter.doAddMessage().

Diving into the DefaultJDBCAdapter lock (
http://grepcode.com/file/repo1.maven.org/maven2/org.apache.activemq/activemq-jdbc-store/5.11.1/org/apache/activemq/store/jdbc/adapter/DefaultJDBCAdapter.java#DefaultJDBCAdapter.0cleanupExclusiveLock),
it looks like doDeleteOldMessages() attempts to acquire the write lock,
which will prevent all of the other threads from continuing until it's
finished.  Since you said previously that the database queries take a long
time (an hour or more?), that would certainly explain unresponsiveness
while that method is running.  And since JDBCPersistenceAdapter.doStart() (
http://grepcode.com/file/repo1.maven.org/maven2/org.apache.activemq/activemq-jdbc-store/5.11.1/org/apache/activemq/store/jdbc/JDBCPersistenceAdapter.java#JDBCPersistenceAdapter.doStart%28%29)
schedules a cleanup task that runs doDeleteOldMessages() every five minutes
by default, a very slow cleanup pass could explain the poor performance
you're seeing.
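If you want to make that cleanup run less often while you investigate, the
interval is controlled by the cleanupPeriod attribute (in milliseconds) on
the JDBC persistence adapter.  A sketch, assuming a datasource bean with the
hypothetical id "postgres-ds":

```xml
<!-- activemq.xml sketch; "postgres-ds" is a hypothetical datasource bean id -->
<persistenceAdapter>
    <!-- cleanupPeriod is in milliseconds; the default is 300000 (5 minutes) -->
    <jdbcPersistenceAdapter dataSource="#postgres-ds" cleanupPeriod="1800000"/>
</persistenceAdapter>
```

That only spreads the pain out, of course; the slow query itself is still
there.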

So here's the real question: why are you using JDBC?  KahaDB is generally
considered to be faster (and more thoroughly optimized) and you seem to be
running into performance problems, so maybe you should be using KahaDB
instead.  Kaha's got its problems (for example, the inability to clean up
journaled files if even a single message in the file can't be deleted), but
it may be a better option for you than JDBC.
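For reference, switching the store is a small config change.  A minimal
sketch of the persistenceAdapter element using KahaDB instead of JDBC:

```xml
<!-- activemq.xml sketch: swapping the JDBC store for KahaDB -->
<persistenceAdapter>
    <kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
```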

Alternatively, you could dig into the inefficient joins that you observed,
figure out how to optimize them (or the software algorithm that needs them
in the first place, whichever), and submit a patch to make the JDBC adapter
better.
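For anyone unfamiliar with why one slow writer stalls everything here:
DefaultJDBCAdapter guards its operations with a ReentrantReadWriteLock, so
while doDeleteOldMessages() holds the write lock, no reader can get in.  A
minimal standalone sketch of that contention pattern (plain JDK, not
ActiveMQ code; the sleep is a stand-in for the slow DELETE query):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

        // Simulate doDeleteOldMessages(): hold the write lock for a while.
        Thread cleanup = new Thread(() -> {
            lock.writeLock().lock();
            try {
                Thread.sleep(500); // stand-in for a slow DELETE query
            } catch (InterruptedException ignored) {
            } finally {
                lock.writeLock().unlock();
            }
        });
        cleanup.start();
        Thread.sleep(100); // let the cleanup thread grab the write lock

        // Simulate a transport thread: the read lock can't be acquired
        // while the write lock is held, so a zero-timeout tryLock fails.
        boolean duringCleanup = lock.readLock().tryLock(0, TimeUnit.MILLISECONDS);
        System.out.println("read lock during cleanup: " + duringCleanup);

        cleanup.join(); // cleanup finishes and releases the write lock
        boolean afterCleanup = lock.readLock().tryLock();
        System.out.println("read lock after cleanup: " + afterCleanup);
        if (afterCleanup) lock.readLock().unlock();
    }
}
```

Every thread in your dump parked on a ReadLock is sitting in the "during
cleanup" state above, for as long as the query runs.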

Tim

On Sat, May 30, 2015 at 9:54 AM, Takawale, Pankaj <
pankaj.takawale@dowjones.com> wrote:

> I ran into the same situation again. When I reboot the database service
> and the ActiveMQ service, AMQ starts delivering messages for a while, and
> then it stops.
>
> The thread dump shows all STOMP threads waiting on a lock - not sure
> if that's a normal scenario?
>
> I'm using nio for openwire, and stomp+nio for stomp.
>
> The thread dump shows the 61613 transport thread waiting to acquire a shared lock.
>
>
> "ActiveMQ Transport: tcp:///10.201.90.222:43684@61613" daemon prio=10
> tid=0x00007f81241a1800 nid=0x433c waiting on condition [0x00007f80a5392000]
>    java.lang.Thread.State: WAITING (parking)
>         at sun.misc.Unsafe.park(Native Method)
>         - parking to wait for  <0x0000000718189928> (a
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync)
>         at
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
>         at
>
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
>         at
>
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:964)
>         at
>
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1282)
>         at
>
> java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.lock(ReentrantReadWriteLock.java:731)
>         at
>
> org.apache.activemq.broker.region.AbstractRegion.getDestinations(AbstractRegion.java:243)
>         at
>
> org.apache.activemq.broker.region.RegionBroker.getDestinations(RegionBroker.java:158)
>         at
>
> org.apache.activemq.broker.BrokerFilter.getDestinations(BrokerFilter.java:82)
>         at
>
> org.apache.activemq.broker.BrokerFilter.getDestinations(BrokerFilter.java:82)
>         at
>
> org.apache.activemq.broker.BrokerFilter.getDestinations(BrokerFilter.java:82)
>         at
>
> org.apache.activemq.broker.BrokerFilter.getDestinations(BrokerFilter.java:82)
>         at
>
> org.apache.activemq.broker.MutableBrokerFilter.getDestinations(MutableBrokerFilter.java:92)
>         at
>
> org.apache.activemq.broker.region.DestinationFilter.send(DestinationFilter.java:160)
>         at
>
> org.apache.activemq.broker.region.virtual.VirtualTopicInterceptor.send(VirtualTopicInterceptor.java:53)
>
>
> On Sat, May 30, 2015 at 12:40 AM, pankajtakawale <
> pankaj.takawale@gmail.com>
> wrote:
>
> > ActiveMQ becomes unresponsive (large pending messages in JDBC message
> > store)
> > netstat shows Recv-Q buffer sizes piling up for a few connections
> >
> >
> >
> > I have around 200 virtual topics. One of the virtual topics has 80
> > selector-aware queues underneath it.
> >
> > There are around 200K pending messages across all queues. PostgreSQL is
> > the persistence store. The AMQ and RDS instances have 15 GB of RAM each.
> >
> > Attached jstack dump    j.lo
> > <http://activemq.2283324.n4.nabble.com/file/n4697094/j.lo>
> >
> > Any workaround or fix?
> >
> > Config snip:
> >
> >                 <policyEntry topic=">" producerFlowControl="false"
> > useCache="false" >
> >                    <dispatchPolicy>
> >                       <roundRobinDispatchPolicy />
> >                     </dispatchPolicy>
> >                   <messageGroupMapFactory>
> >                     <simpleMessageGroupMapFactory/>
> >                   </messageGroupMapFactory>
> >                 </policyEntry>
> >
> >
> >                 <policyEntry queue=">" timeBeforeDispatchStarts="5000"
> > producerFlowControl="false" maxPageSize="1000" useCache="false"
> > expireMessagesPeriod="0" optimizedDispatch="true">
> >                    <dispatchPolicy>
> >                       <roundRobinDispatchPolicy />
> >                     </dispatchPolicy>
> >                   <messageGroupMapFactory>
> >                     <simpleMessageGroupMapFactory/>
> >                   </messageGroupMapFactory>
> >                   <pendingMessageLimitStrategy>
> >                     <constantPendingMessageLimitStrategy limit="-1"/>
> >                   </pendingMessageLimitStrategy>
> >                   <pendingQueuePolicy>
> >                         <fileQueueCursor />
> >                   </pendingQueuePolicy>
> >                 </policyEntry>
> >
> >
> >           <systemUsage>
> >             <systemUsage sendFailIfNoSpaceAfterTimeout="10000">
> >                 <memoryUsage>
> >                     <memoryUsage percentOfJvmHeap="70" />
> >                 </memoryUsage>
> >                 <storeUsage>
> >                     <storeUsage limit="90 gb"/>
> >                 </storeUsage>
> >                 <tempUsage>
> >                     <tempUsage limit="10 gb"/>
> >                 </tempUsage>
> >             </systemUsage>
> >         </systemUsage>
> >
> >
> >
> >
> > --
> > View this message in context:
> >
> http://activemq.2283324.n4.nabble.com/ActiveMQ-Recv-Q-buffer-size-tp4697094.html
> > Sent from the ActiveMQ - User mailing list archive at Nabble.com.
> >
>
