activemq-users mailing list archives

From Joe Smith <>
Subject Not able to load-balance messages in a cluster using PrefetchPolicy
Date Wed, 08 Jun 2011 21:31:43 GMT



We are using ActiveMQ 5.4.2 on Linux with Java 1.6. The cluster has 3 primary brokers
and corresponding slaves (residing on separate hosts). We have 3 message-consumer processes
(using the Spring Framework with 3 concurrent threads each), one connected to each broker.
Occasionally a consumer process gets a message with a long running time due to interaction
with an external system, so a message can take up to 20 minutes to process (waiting for the
other system to complete its task).


To balance the load better, and to prevent messages from getting stuck behind an
earlier, long-running message, we set prefetchPolicy.queuePrefetch=1 on the
ConnectionFactory.
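For reference, a minimal Spring bean sketch of that setting (the broker URL and bean id are illustrative placeholders, not our real values):

```xml
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
  <property name="brokerURL" value="failover:(tcp://broker1:61616)"/>
  <!-- queuePrefetch=1: each consumer buffers at most one queue message -->
  <property name="prefetchPolicy">
    <bean class="org.apache.activemq.ActiveMQPrefetchPolicy">
      <property name="queuePrefetch" value="1"/>
    </bean>
  </property>
</bean>
```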


The prefetch setting works within a single consumer process: newer messages get
distributed to other consumer threads that are free. However, it looks like the
messages are dispatched round-robin to the 3 consumer processes (rather than
across the 9 consumer threads) in the cluster. Since we have 3 independent
consumer processes (one Spring JVM each), we have seen multiple messages
dispatched to one consumer process and left waiting, while the other 2 consumer
processes, connected to the other 2 brokers, sit idle.


Is there a configuration that will distribute messages from one broker to another
without their getting stuck behind a long-running consumer thread? We can work
around the issue by increasing the number of consumer threads within each
consumer process, but we suspect there is a better, more efficient solution.


We have used the following config params:

- ConnectionFactory
  - prefetchPolicy
  - dispatchAsync = false
  - useAsyncSend = false
  - alwaysSessionAsync = false

- Spring Framework DefaultMessageListenerContainer
  - receiveTimeout = 0

- broker config XML
  - conduitSubscriptions="false"
  - we added ?jms.prefetchPolicy.all=1 to the NetworkConnector's uri attribute;
    we also tried just .queuePrefetch instead of .all
  - we tried different strict-order dispatch policies; they made no difference
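Concretely, the NetworkConnector change looks like the following sketch (the host names and the static discovery list are illustrative, not our real topology):

```xml
<networkConnectors>
  <!-- prefetch parameter appended to the uri attribute, as described above -->
  <networkConnector
      uri="static:(tcp://broker2:61616,tcp://broker3:61616)?jms.prefetchPolicy.all=1"
      conduitSubscriptions="false"/>
</networkConnectors>
```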


Things we've noticed:

- In the Spring Framework we can set the prefetchPolicy value to either 1 or 0,
  and the behavior seems to be the same. However, we could not use the value 0
  with our pure Java client.

- Messages are dispatched to the 3 brokers in a more or less round-robin
  fashion. We have noticed that occasionally a consumer process still gets a
  batch of messages. Once it gets a batch, it does not release any queued
  messages to other brokers/consumers that are free.
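The batching effect can be sketched with a toy model. This is NOT ActiveMQ's actual dispatch algorithm (all names and the dispatch order here are assumptions for illustration); it only demonstrates the property observed above: once a message lands in a consumer's prefetch buffer it stays there, even if other consumers later go idle.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy model of prefetch buffering -- not ActiveMQ code.
public class PrefetchDemo {

    // Round-robin dispatch among consumers that still have prefetch credit;
    // messages that find no free credit stay on the broker queue.
    static List<Deque<Integer>> dispatch(Deque<Integer> brokerQueue,
                                         int consumers, int prefetch) {
        List<Deque<Integer>> buffers = new ArrayList<>();
        for (int i = 0; i < consumers; i++) {
            buffers.add(new ArrayDeque<>());
        }
        boolean progress = true;
        while (progress && !brokerQueue.isEmpty()) {
            progress = false;
            for (Deque<Integer> buffer : buffers) {
                if (!brokerQueue.isEmpty() && buffer.size() < prefetch) {
                    buffer.add(brokerQueue.poll()); // buffered until consumed
                    progress = true;
                }
            }
        }
        return buffers;
    }

    public static void main(String[] args) {
        // Payloads are sleep times; 20 is the long-running message.
        Deque<Integer> broker = new ArrayDeque<>(List.of(20, 1, 2, 3, 4, 5));
        // prefetch=2: each consumer buffers two messages, e.g.
        // [[20, 3], [1, 4], [2, 5]]. Consumer 0 is stuck on the 20-second
        // message, yet message 3 sits in its buffer; consumers 1 and 2 will
        // drain their buffers, go idle, and never receive it.
        System.out.println(dispatch(broker, 3, 2));
        // prefetch=1: only one message per consumer leaves the broker, so a
        // consumer that frees up can take the next message immediately.
        System.out.println(dispatch(new ArrayDeque<>(List.of(20, 1, 2)), 3, 1));
    }
}
```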


There are many configuration parameters, and we may have missed some critical ones.


Thanks for your help.



Attached is the Spring Framework config. It's one file, but I duplicated the
beans for the ConnectionFactory, listener, containers, etc. to simulate the 3
independent processes. The number of threads is set to one for testing purposes.
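Each duplicated bean set follows this general shape (the ids, queue name, and broker host are illustrative placeholders, not the attachment's actual values):

```xml
<bean id="connectionFactory1" class="org.apache.activemq.ActiveMQConnectionFactory">
  <property name="brokerURL"
            value="tcp://broker1:61616?jms.prefetchPolicy.queuePrefetch=1"/>
</bean>

<bean id="container1"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer">
  <property name="connectionFactory" ref="connectionFactory1"/>
  <property name="destinationName" value="TEST.QUEUE"/>
  <property name="messageListener" ref="listener1"/>
  <!-- one consumer thread per container for testing -->
  <property name="concurrentConsumers" value="1"/>
  <property name="receiveTimeout" value="0"/>
</bean>
```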


The activemq.xml is pretty much the default.

I used a Java client to send messages; each payload is just a number. The message
listener reads the number and sleeps for that many seconds. This way I can
enqueue a message with a large number followed by messages with smaller numbers.
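A hypothetical reconstruction of that listener (the class and method names are assumptions, not the actual attached code), stripped of the JMS plumbing so only the sleep/log behavior remains:

```java
// Assumed reconstruction of the test listener: the payload is a number of
// seconds; the listener logs a "sleep" line, sleeps, then logs a "wake" line
// in the same format as the sample run below.
public class SleepingListener {
    private final String name;   // e.g. "Thread-2" in the sample output
    private int count = 0;       // per-thread message counter, shown as [n]

    public SleepingListener(String name) {
        this.name = name;
    }

    // Returns the formatted wake line so the behavior is easy to verify.
    public String onMessage(String payload) {
        int seconds = Integer.parseInt(payload.trim());
        count++;
        System.out.printf("%s [%d] sleep %d%n", name, count, seconds);
        try {
            Thread.sleep(seconds * 1000L);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        String wake = String.format("%s [%d] wake  %d", name, count, seconds);
        System.out.println(wake);
        return wake;
    }

    public static void main(String[] args) {
        // A 0-second payload returns immediately.
        new SleepingListener("Thread-2").onMessage("0");
    }
}
```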

A sample run shows one thread processing multiple messages while consumers in the
other containers are idle.
Thread-2 [1] sleep 2
Thread-2 [1] wake  2
Thread-4 [1] sleep 1
Thread-3 [1] sleep 2
Thread-2 [2] sleep 3
Thread-4 [1] wake  1
Thread-4 [2] sleep 4
Thread-3 [1] wake  2
Thread-2 [2] wake  3
Thread-4 [2] wake  4
Thread-4 [3] sleep 5
Thread-4 [3] wake  5
Thread-4 [4] sleep 6
Thread-4 [4] wake  6
Thread-4 [5] sleep 7
Thread-4 [5] wake  7
Thread-4 [6] sleep 8
Thread-4 [6] wake  8
Thread-4 [7] sleep 9
Thread-4 [7] wake  9
Thread-4 [8] sleep 10
Thread-4 [8] wake  10

enter q to exit> 
Thread-3 [2] sleep 10
Thread-2 [3] sleep 20
Thread-4 [9] sleep 10
Thread-3 [2] wake  10
Thread-3 [3] sleep 5
Thread-4 [9] wake  10
Thread-3 [3] wake  5
Thread-3 [4] sleep 5
Thread-3 [4] wake  5
Thread-3 [5] sleep 5
Thread-2 [3] wake  20  <-- t2 is awake and idle
Thread-3 [5] wake  5    <-- t3 processes a batch of msgs
Thread-3 [6] sleep 5
Thread-3 [6] wake  5
Thread-3 [7] sleep 5
Thread-3 [7] wake  5
Thread-3 [8] sleep 5
Thread-3 [8] wake  5
Thread-3 [9] sleep 5
Thread-3 [9] wake  5
Thread-3 [10] sleep 5
Thread-3 [10] wake  5
Thread-3 [11] sleep 5
Thread-3 [11] wake  5
Thread-3 [12] sleep 5
Thread-3 [12] wake  5
Thread-3 [13] sleep 5
Thread-3 [13] wake  5
Thread-3 [14] sleep 5
Thread-3 [14] wake  5
