activemq-users mailing list archives

From Tim Bain <>
Subject Re: ** JMS Client HANGING - AMQ 5.9, AIX 6.1
Date Fri, 26 Jun 2015 14:01:02 GMT
The stack trace you quoted is irrelevant; it's just executors waiting to be
given work to do.  There are also lots of threads trying to read messages
from sockets in
org/apache/activemq/transport/tcp/TcpBufferedInputStream.fill() or waiting
for a message to be available during a call to
org/apache/activemq/SimplePriorityMessageDispatchChannel.dequeue(); both of
those are also irrelevant, because they're just ActiveMQ waiting to be
given work.

There are two threads waiting for responses to synchronous sends in
org/apache/activemq/ActiveMQConnection.syncSendPacket().  Those might
simply be victims of the inability to read messages, or they might be
relevant to what's going on; it's hard to tell from what you've sent.  One
thing I'd check based on them (and one thing I'd always check in general,
so hopefully you've already done this) is whether there are any errors in
the ActiveMQ broker logs, and specifically whether there are any messages
about producer flow control kicking in.  Depending on how PFC is
configured, I believe I've seen at least one JIRA or wiki page describing
the potential for PFC to cause deadlock when synchronous sends are used by
preventing the acks from being read.  If you see PFC-related lines in the
broker logs, we'll go from there; if not, then don't worry about this.
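If PFC does turn out to be involved, the usual mitigations are configured per-destination in the broker's activemq.xml. A rough sketch, with element and attribute names taken from the ActiveMQ documentation; the queue pattern ">" (all queues) and the memory limit are placeholders to adapt, not recommendations:

```xml
<!-- Inside the <broker> element of activemq.xml -->
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- Option 1: turn PFC off for these queues so producers are never
           blocked (the broker will page to disk instead when memory fills). -->
      <policyEntry queue=">" producerFlowControl="false" memoryLimit="64mb"/>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```

Alternatively, PFC can be left on while making blocked sends fail fast instead of hanging (`sendFailIfNoSpace` on the systemUsage element), which at least turns a silent hang into a visible exception on the client.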

My overall thought, however, is that ActiveMQ (and the Spring JMS library
you're using) on its own isn't likely to run your client out of memory
unless your messages are VERY large, because there are limits on how many
messages will be transferred to your client at any one time.  Plus this
code has been run by LOTS of people over the years; if it caused OOMs on
its own, the cause would almost certainly have already been found.  So it's
most likely that this behavior is caused by something your own code is
doing, and the most likely guess is that you may be wrongly holding a
reference to objects that could otherwise be GCed, increasing heap memory
over time until you eventually run out.  You'll probably want to use a tool
such as JVisualVM to analyze your memory usage and figure out which objects
are causing the heap to grow and what's holding references to them.
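To make the "accidentally held reference" pattern concrete, here is a minimal, hypothetical sketch in plain Java (no ActiveMQ code involved; the class and method names are invented for illustration). The bug is a static collection that retains every message payload ever processed, so the heap can only grow and full GCs reclaim nothing:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the leak pattern described above: a handler that
// keeps a strong reference to every message it has processed.
public class LeakySketch {
    // The accidental "cache" -- entries are added but never removed, so
    // nothing added here can ever be garbage-collected.
    static final List<String> processed = new ArrayList<>();

    static void onMessage(String body) {
        // ... real processing would happen here ...
        processed.add(body); // BUG: retains a reference to every payload
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100_000; i++) {
            onMessage("message-" + i);
        }
        // Every payload is still reachable from the static list; a heap dump
        // in JVisualVM would show `processed` as the root holding them all.
        System.out.println(processed.size()); // prints 100000
    }
}
```

In a heap dump, the telltale sign is one collection whose retained size keeps climbing between snapshots; following its GC-root chain points straight at the offending field.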

One other possibility is that your algorithm is correct, but processing
each message is memory-intensive (using over half the heap in total across
however many messages you're processing in parallel) and so lots of objects
are getting forced into Old Gen even though they're actually short-lived
objects, and they are only getting removed from Old Gen via full GCs.  I
think this is far less likely than the other things I've described, but if
it's the problem, you could 1) increase the JVM's heap size if possible, 2)
tweak the percentages allocated to Old Gen and Young Gen to give more to
Young Gen in the hopes that more things will stay in Young Gen for longer,
or 3) look into other GC strategies (I'd recommend G1GC, but you appear to
be on the IBM JVM and I've never used it or researched it so I don't know
what GC strategies it offers).  But I think you'd really want to prove to
yourself that this is your problem (i.e. that none of the other things I've
mentioned are) before you go down this path, because throwing more memory
at a memory leak doesn't fix it; it just delays it and makes it harder to
diagnose.
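For reference, options 1-3 above map onto JVM command-line flags roughly as follows. These are HotSpot flags shown purely as an illustration (the poster is on the IBM JVM, where the rough equivalents are -Xmx, -Xmn, and -Xgcpolicy); the sizes are placeholders, not tuning advice:

```
# Option 1: increase the heap size
java -Xmx4g MyConsumerApp

# Option 2: shift the Old Gen / Young Gen split toward Young Gen
# (NewRatio=1 makes Young Gen as large as Old Gen)
java -Xmx4g -XX:NewRatio=1 MyConsumerApp

# Option 3: try a different collector (on IBM J9, compare -Xgcpolicy:gencon)
java -Xmx4g -XX:+UseG1GC MyConsumerApp
```

Running with `-verbose:gc` while reproducing the problem is the cheapest way to confirm whether full GCs are actually reclaiming space (a leak) or churning short-lived objects out of Old Gen (this scenario).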


On Fri, Jun 26, 2015 at 1:53 AM, cdelgado <> wrote:

> Hi all,
> We're facing an issue that is stopping us from going to production; this is
> a huge blocker for us.
> The problem is that one of our consumers is hanging (randomly, apparently)
> and stops consuming messages. From JMX we can see that it is consuming
> memory and performing quite a lot of full GCs.
> I'm attaching a javacore dump generated sending a kill -3 to the process.
> There you can see all the details and thread statuses.
> javacore.txt
> <>
> Basically, we have 90.7% of the threads waiting on condition, 3.5% Parked
> and 5.7% Running.
> The Parked threads have different stacktraces, but generally they end in
> the
> same block:
> at sun/misc/Unsafe.park(Native Method)
> at java/util/concurrent/locks/LockSupport.parkNanos(Compiled Code)
> at java/util/concurrent/SynchronousQueue$TransferStack.awaitFulfill(Compiled Code)
> at java/util/concurrent/SynchronousQueue$TransferStack.transfer(Compiled Code)
> at java/util/concurrent/SynchronousQueue.poll(Compiled Code)
> at java/util/concurrent/ThreadPoolExecutor.getTask(Compiled Code)
> at java/util/concurrent/ThreadPoolExecutor$Worker.run(...)
> at java/lang/Thread.run(...)
> Any *quick* help would be much appreciated, I'm a bit lost here... :S
> Carlos
