activemq-users mailing list archives

From "James Strachan" <>
Subject Re: Message queue is filled up.
Date Fri, 07 Jul 2006 05:53:57 GMT
On 7/7/06, Kuppe <> wrote:
> My sincere apologies, it seems that the "Message queue is filled up" error
> message is not coming from ActiveMQ at all. It is coming from my own
> framework that is not able to process the number of messages being delivered
> by ActiveMQ:)

LOL! :) Thanks for letting us know - phew, there's not a bug in our
error message reporting :)

> Due to limitations in our previous messaging solution, I have an incoming
> and outgoing queue for receiving and sending messages implemented in my
> framework. As I am now using async dispatch and async sending, these queues
> are somehow redundant. At the same time, the throughput of my framework also
> seems to be somewhat limited with all the context switching and processing.


> Accordingly I would like to reduce this complexity by removing my framework
> queues and rely entirely on the async dispatch/send that is implemented in
> ActiveMQ. Is this an approach that you would recommend?

Definitely. The sync vs async configuration is something you may wish
to change over time (on a per connection/producer/consumer basis).
e.g. for broker dispatching to consumers, doing it synchronously is
often a bit faster as it reduces context switching, though it
increases the chance of a slow consumer blocking other consumers for a
little while. Leaving all this stuff to the messaging system makes it
easier for you to tweak things (often via a URI) and remove some of
the layers in your code.
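For instance, switching a connection to async sending can be a URI
option rather than code (a sketch only - the host, port and option
value here are illustrative, so check the connection URI reference for
the options your ActiveMQ version actually supports):

```java
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

// Sketch: tuning sync/async behaviour via connection URI options instead
// of an extra queueing layer in application code. Host, port and the
// option value are illustrative.
ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
        "tcp://localhost:61616?jms.useAsyncSend=true");
Connection connection = factory.createConnection();
```

The point being that the tweak then lives in deployment configuration,
not in your framework's code.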

> At the same time, my
> framework does offer me control as to how many threads are processing the
> messages in these two receive and send queues. If I am to replace my
> framework queues with the async functionality of ActiveMQ, I would like to
> know the answer to the following two related questions:
> Q1: When using async send, is this doing the message sending in another
> thread context?


>  If so, are there any configuration options for limiting the
> number of threads processing the sending of messages? What is the default
> algorithm for the number of sending threads - one per session, producer???

We use thread pools for pretty much everything now to minimise the
number of actual threads used; we've done quite a bit of tuning in
that regard in 4.x. (In 3.x there tended to be lots of threads created
per transport, session etc.)

Whether you use sync or async in the JMS client there is typically
(for tcp:// and vm:// at least) a thread sending and a thread
receiving messages. All these threads are doing is streaming messages
onto/off of a socket so there is no real reason to pool them (e.g.
having 2 threads writing would generally be slower).

Async send just means the thread doing the producer.send() doesn't
block waiting for a response from the broker, so there's no real
difference in the number of threads used; it just removes the latency
from the send() thread.
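A plain-Java sketch of that distinction (the SendSketch class, the
50 ms "broker" delay and the executor standing in for the transport
are all invented for illustration - this is not ActiveMQ's code):

```java
import java.util.concurrent.*;

// Plain-Java analogy: the only difference between sync and async send is
// whether the calling thread waits for the broker's acknowledgement
// before send() returns.
public class SendSketch {
    // Pretend the broker takes ~50 ms to process and acknowledge a message.
    static Future<?> deliver(ExecutorService broker, String msg) {
        return broker.submit(() -> {
            try { Thread.sleep(50); } catch (InterruptedException e) { }
        });
    }

    // Returns {asyncMillis, syncMillis} for one send of each kind.
    public static long[] measure() throws Exception {
        ExecutorService broker = Executors.newSingleThreadExecutor();

        long t0 = System.nanoTime();
        Future<?> ack = deliver(broker, "hello"); // async: return at once
        long asyncMs = (System.nanoTime() - t0) / 1_000_000;
        ack.get();                                // drain before next test

        t0 = System.nanoTime();
        deliver(broker, "hello").get();           // sync: block for the ack
        long syncMs = (System.nanoTime() - t0) / 1_000_000;

        broker.shutdown();
        return new long[] { asyncMs, syncMs };
    }

    public static void main(String[] args) throws Exception {
        long[] m = measure();
        System.out.println("async send returned in ~" + m[0]
                + " ms, sync send in ~" + m[1] + " ms");
    }
}
```

Same thread count either way; only where the waiting happens changes.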

> Q2: In line with Q1, if i am using async dispatch, is this doing the message
> dispatching in another thread context?

Yes. e.g. the default for non-durable topics is that the thread
processing incoming messages from a producer will dispatch the message
to each consumer's socket. With async dispatch, a thread pool is used
to do the dispatching instead.
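The broker-side pattern can be sketched in plain Java (DispatchSketch,
the pool size and the consumer names are made up for illustration and
are not ActiveMQ's actual classes): delivering one message to N
consumers becomes N small tasks on a shared pool, instead of N socket
writes on the producer's own thread.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of broker-side async dispatch: each consumer delivery is a
// separate task submitted to a shared thread pool.
public class DispatchSketch {
    public static List<String> dispatch(List<String> consumers, String msg)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<String> delivered =
                Collections.synchronizedList(new ArrayList<>());
        List<Future<?>> tasks = new ArrayList<>();
        for (String consumer : consumers) {
            // each consumer is a separate task added to the pool
            tasks.add(pool.submit(() -> delivered.add(consumer + " <- " + msg)));
        }
        for (Future<?> t : tasks) t.get(); // wait for all deliveries
        pool.shutdown();
        return delivered;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(dispatch(Arrays.asList("A", "B", "C"), "tick"));
    }
}
```

A slow consumer then only ties up one pool thread rather than stalling
the producer's thread for everyone.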

> If so, are there any configuration
> options for limiting the number of threads processing the dispatching of
> messages?

There could well be - not sure off the top of my head :)

> What is the default algorithm for the number of dispatching threads
> - one per session, consumer???

This is a broker-side thing, so it's purely to do with how many
consumers there are; each async dispatch consumer is a separate task
which is added to the thread pool.

It's on the client side that all of a session's messages are
dispatched to consumers in the same thread (but since we use a thread
pool, if you have many sessions we may not use that many threads).
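A plain-Java sketch of that client-side rule (SessionSketch is
invented for illustration; a single-threaded executor stands in for
the per-session dispatch thread, which is how one session's messages
stay ordered while many sessions can still share the JVM's threads):

```java
import java.util.concurrent.*;

// Sketch: one session = one serial dispatch stream. A single-threaded
// executor guarantees the session's messages are handled in order by
// one thread at a time.
public class SessionSketch {
    public static String run() throws Exception {
        ExecutorService session = Executors.newSingleThreadExecutor();
        StringBuilder order = new StringBuilder();
        for (int i = 1; i <= 3; i++) {
            int n = i;
            session.submit(() -> order.append(n)); // serialised per session
        }
        session.shutdown();
        session.awaitTermination(5, TimeUnit.SECONDS);
        return order.toString(); // one session, one dispatch order
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());
    }
}
```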

