activemq-users mailing list archives

From Tim Bain <>
Subject Re: Slow MySQL datastore performance.
Date Thu, 22 Oct 2015 13:35:48 GMT
In your example of 100 consumers, I would expect that when a message is
received, 100 ack rows would be inserted (ideally in a single transaction),
and when each consumer receives its copy of the message, one row would be
updated (so by the time they have all received it, you'll have done 100
updates).  I'm assuming this all makes sense and that your question is "why
is each update a separate transaction?"
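To make the bookkeeping above concrete, here is a minimal in-memory sketch of that pattern: one ack row per durable consumer inserted up front, then one row update per consumer as each ack arrives. The class and method names (AckRowSketch, insertAckRows, recordAck) are illustrative only, not ActiveMQ's actual JDBC store classes.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model of per-consumer ack rows for one message on a durable
// topic. Real ActiveMQ does this with SQL rows; this sketch only mirrors
// the row counts described above.
public class AckRowSketch {
    static class AckRow {
        final String subscriberId;
        long lastAckedSeq = -1;  // -1 = this consumer hasn't acked yet
        AckRow(String subscriberId) { this.subscriberId = subscriberId; }
    }

    // One row per consumer, created together when the message arrives
    // (ideally a single transaction in the real store).
    static List<AckRow> insertAckRows(int subscriberCount) {
        List<AckRow> rows = new ArrayList<>();
        for (int i = 0; i < subscriberCount; i++) {
            rows.add(new AckRow("consumer-" + i));
        }
        return rows;
    }

    // Each consumer's ack updates exactly its own row; returns the number
    // of rows touched (1 if the subscriber exists, 0 otherwise).
    static int recordAck(List<AckRow> rows, String subscriberId, long seq) {
        int updates = 0;
        for (AckRow r : rows) {
            if (r.subscriberId.equals(subscriberId)) {
                r.lastAckedSeq = seq;
                updates++;
            }
        }
        return updates;
    }

    public static void main(String[] args) {
        List<AckRow> rows = insertAckRows(100);  // 100 rows for one message
        int totalUpdates = 0;
        for (int i = 0; i < 100; i++) {
            totalUpdates += recordAck(rows, "consumer-" + i, 1L);
        }
        // 100 rows inserted once, then 100 separate updates
        System.out.println(rows.size() + " rows, " + totalUpdates + " updates");
    }
}
```

In the real store each recordAck call corresponds to its own UPDATE statement, and the question in the thread is why each of those runs in its own transaction.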

Batching updates made within a very short time window into a single
transaction is possible, but 1) it's more complicated code, because you
need to coordinate the requested updates across threads, and 2) it
increases latency for some threads: the early arrivals in the window have
to wait until the window closes and the database update is issued before
taking their next action (which may depend on knowing that the previous
ack has been durably recorded), and that wait might be worse than the
overhead of starting a transaction.
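The time-window batching idea can be sketched as below. This is not ActiveMQ code, just a hypothetical coordinator (the AckBatcher name and its API are invented for illustration): consumer threads submit acks, everything that arrives within the window is flushed in one "commit", and each submitter blocks on a future until that commit happens, which is exactly the extra latency for early arrivals described above.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical micro-batching coordinator: collects acks for windowMillis,
// then completes one shared future for the whole batch (standing in for a
// single database transaction).
public class AckBatcher {
    private final long windowMillis;
    private List<String> pending = new ArrayList<>();
    private CompletableFuture<Integer> pendingCommit = null;
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();

    AckBatcher(long windowMillis) { this.windowMillis = windowMillis; }

    // Called from consumer threads. The first submit in a window schedules
    // the flush; every caller gets the same future, completed only when the
    // whole batch has been "committed" (here, just counted).
    synchronized CompletableFuture<Integer> submit(String ackId) {
        if (pendingCommit == null) {
            pendingCommit = new CompletableFuture<>();
            timer.schedule(this::flush, windowMillis, TimeUnit.MILLISECONDS);
        }
        pending.add(ackId);
        return pendingCommit;
    }

    private synchronized void flush() {
        // One transaction covering everything gathered during the window.
        pendingCommit.complete(pending.size());
        pending = new ArrayList<>();
        pendingCommit = null;
    }

    void shutdown() { timer.shutdown(); }

    public static void main(String[] args) {
        AckBatcher batcher = new AckBatcher(50);
        List<CompletableFuture<Integer>> futures = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            futures.add(batcher.submit("ack-" + i));
        }
        // All submits landed within one window, so one flush covers them;
        // each caller waited for the window to close before proceeding.
        System.out.println("batch size = " + futures.get(0).join());
        batcher.shutdown();
    }
}
```

Note the trade-off in the sketch: the first thread to submit pays nearly the full window as added latency before its future completes, which is the cost the paragraph above weighs against per-update transaction overhead.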

Any solution that wanted to batch several updates from the same thread
(i.e. allow the thread to continue working asynchronously) would have to
ensure that the actions taken afterward would not be invalidated if the
broker crashed without writing the updates to the database (which I think
would be difficult), or it would need to relax the guarantees that ActiveMQ
makes about how messages are handled (which probably isn't going to happen
just to improve the performance of a store technology that's infrequently
used).
On Oct 21, 2015 8:47 AM, "will1" <> wrote:

> Thanks Tim.. If anyone has any relevant benchmarks they could provide, that
> would be really useful.. any more thoughts on the original question would
> be
> appreciated also..
