activemq-users mailing list archives

From: Tim Bain <tb...@alumni.duke.edu>
Subject: Re: Performance degrade issue using ActiveMQ scheduler
Date: Mon, 25 May 2015 14:16:18 GMT
On May 25, 2015 2:51 AM, "contezero74" <eros.pedrini@gmail.com> wrote:
>
> Hi Tim,
> your suggestion was one of the first things we tried, but the result
> didn't change.  The problem seems closely tied to using the ActiveMQ
> scheduler area with a large number of messages.

My suggestion was that you subdivide your future messages based on time, so
that only a small number of them are actually scheduled messages (and the
rest are normal messages, which the Camel route will schedule later, when
the beginning of their holding queue's time window arrives).  So either you
haven't implemented what I suggested, or the problem is unrelated to the
number of scheduled messages (if drastically cutting that number had no
effect).
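
To make that concrete, here's a rough sketch of the producer side (the
queue names, the one-hour window, and the dueAt property are placeholders
I'm inventing for illustration, not anything from your setup):

import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

import org.apache.activemq.ScheduledMessage;

public class BucketingProducer {
    private static final long WINDOW_MS = 60 * 60 * 1000L; // placeholder: one-hour buckets

    public static void send(Session session, MessageProducer toT,
                            String payload, long dueAtMillis) throws JMSException {
        long now = System.currentTimeMillis();
        TextMessage msg = session.createTextMessage(payload);
        if (dueAtMillis - now < WINDOW_MS) {
            // Due within the current window: hand it to the broker scheduler.
            msg.setLongProperty(ScheduledMessage.AMQ_SCHEDULED_DELAY,
                    Math.max(0L, dueAtMillis - now));
            toT.send(msg);
        } else {
            // Due later: park it as a normal message in its window's holding queue.
            long windowStart = (dueAtMillis / WINDOW_MS) * WINDOW_MS;
            Queue holding = session.createQueue("holding." + windowStart);
            msg.setLongProperty("dueAt", dueAtMillis); // made-up property for the mover
            MessageProducer p = session.createProducer(holding);
            try {
                p.send(msg);
            } finally {
                p.close(); // real code would cache producers instead
            }
        }
    }
}

Only the first branch ever touches the scheduler store, so the number of
truly scheduled messages stays bounded by whatever falls due within a
single window.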

In fact, you could use my suggestion to avoid message scheduling entirely,
by having the Camel route not process a bucket till the end of each
bucket's time window instead of the beginning, if you can afford the small
delay in processing the messages; if that's still just as slow, you can be
very sure message scheduling isn't to blame.  (This might be what you
described in the next paragraph, but I wasn't sure what you meant by
connecting queues to each other.)
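
Here's roughly what I mean by draining a bucket at the end of its window;
I'm sketching it with a plain JMS client and java.util.Timer rather than a
Camel route just to keep it self-contained, and the queue names are again
made up:

import java.util.Date;
import java.util.Timer;
import java.util.TimerTask;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;

public class BucketDrainer {
    // When a window closes, move everything in its holding queue to T as
    // normal messages; the broker scheduler is never involved.
    public static void scheduleDrain(final ConnectionFactory factory,
                                     final String holdingQueue,
                                     long windowEndMillis) {
        new Timer(true).schedule(new TimerTask() {
            @Override
            public void run() {
                Connection conn = null;
                try {
                    conn = factory.createConnection();
                    conn.start();
                    Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageConsumer from =
                            session.createConsumer(session.createQueue(holdingQueue));
                    MessageProducer to =
                            session.createProducer(session.createQueue("T"));
                    Message m;
                    // receive(timeout) returns null once the queue is empty.
                    while ((m = from.receive(1000)) != null) {
                        to.send(m);
                    }
                } catch (JMSException e) {
                    e.printStackTrace(); // real code would log and retry
                } finally {
                    if (conn != null) {
                        try { conn.close(); } catch (JMSException ignored) { }
                    }
                }
            }
        }, new Date(windowEndMillis));
    }
}

Note the move isn't transactional as written; using a transacted session
for the consume-and-send pair would make it atomic.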

> At the moment, we are implementing a work-around based on persistence
> queues and application timers: when a timer is triggered, it connects the
> persistence queue assigned to it to the queue T.  On a small scale it
> seems to work: now we are cleaning up the code and continuing the tests.
>
> Returning to the initial problem: with VisualVM we weren't able to spot
> the problem: the memory usage and the load distribution among the
> different classes are almost the same in the two scenarios.

VisualVM has a Profiler tab that lets you see how much CPU each method took
and how many calls it made.  See https://visualvm.java.net/profiler.html
for more info.  Since we're not sure what the bottleneck is, this may or
may not show what the problem is, but at least look for methods with a lot
of Self Time or a lot of invocations and see if you can tie those to
scheduled messages.

Another technique you can use is to attach Eclipse to your running broker
and pause all threads and look through each one's call stack to see what
it's doing.  Find the one doing something related to scheduled messages,
then resume all threads and repeatedly pause and resume that one thread to
see where it is each time.  If, after a dozen repetitions, the thread is
always in the same method call, you can assume it's a bottleneck and start
digging into why.  Make sure you do this on a
non-production broker (or that it's OK to pause a production broker while
you do this, which seems unlikely), and download the ActiveMQ source files
for your version (5.10.0, right?) and attach them as source to your
ActiveMQ JAR so you can see the context of why a given call is made.
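
If attaching Eclipse isn't convenient, you can approximate the same
repeated-pause trick by sampling thread dumps over JMX from a small
standalone program (the JMX URL below is an assumption; point it at
whatever port your broker's JMX connector actually uses):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class BrokerSampler {
    public static void main(String[] args) throws Exception {
        // Assumed endpoint: match your broker's com.sun.management.jmxremote.port.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ThreadMXBean threads = ManagementFactory.newPlatformMXBeanProxy(
                    mbsc, ManagementFactory.THREAD_MXBEAN_NAME, ThreadMXBean.class);
            for (int sample = 0; sample < 12; sample++) {
                for (ThreadInfo info : threads.dumpAllThreads(false, false)) {
                    // Print frames that look scheduler-related, e.g. classes in
                    // org.apache.activemq.broker.scheduler.
                    for (StackTraceElement frame : info.getStackTrace()) {
                        if (frame.getClassName().contains("scheduler")) {
                            System.out.println(info.getThreadName() + " -> " + frame);
                        }
                    }
                }
                Thread.sleep(1000); // roughly one sample per second
            }
        } finally {
            connector.close();
        }
    }
}

If the same scheduler-related frame keeps turning up sample after sample,
that's the same signal the debugger approach would give you.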

> I hope to have time to return as soon as I can to analyse the root cause
> of our problem.
>
> P.S. To be honest, I'm wondering whether this is just our misunderstanding
> of ActiveMQ's capabilities, and in fact it wasn't designed for such massive
> use of the scheduler area.
>
> cheers,
> Eros

ActiveMQ's scheduled messages may not be optimized yet for this volume, but
I wouldn't assume they weren't *designed* for it.  Hopefully with your help
we can tune them to handle this load (if indeed that is the problem), and if
it turns out the design itself can't accommodate this volume, then hopefully
redesigning the algorithm won't be hard.

> --
> View this message in context:
> http://activemq.2283324.n4.nabble.com/Performance-degrade-issue-using-ActiveMQ-scheduler-tp4696754p4696899.html
> Sent from the ActiveMQ - User mailing list archive at Nabble.com.
