activemq-users mailing list archives

From Elliot Barlas <elliotbar...@gmail.com>
Subject Re: 50k + messages stuck in queue with all consumers blocking on receive
Date Wed, 24 Feb 2010 17:17:48 GMT

Okay, the issue I am seeing is slightly different, then.  In my case the
broker isn't even dispatching some messages, at least according to the
broker stats: Dequeued = Dispatched, and Dispatched < Enqueued.

Thanks,
Elliot


Maarten_D wrote:
> 
> It appears that our problem had to do with prefetching. We had our
> prefetch values for queues set fairly high, and when a client application
> would crash, a bunch of messages that were prefetched but not ack'ed
> would remain stuck in the queue. We added
> "jms.prefetchPolicy.queuePrefetch=1" to our connection URI, and haven't
> seen this behaviour since.
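> 
> For reference, the option just ends up as a query parameter on the broker
> URL of the connection factory. A minimal sketch of what that looks like in
> a Spring config (the bean id and the tcp://localhost:61616 address are
> placeholders, not our exact settings):
> 
>   <bean id="connectionFactory"
>         class="org.apache.activemq.ActiveMQConnectionFactory">
>     <!-- queuePrefetch=1: the broker dispatches at most one unacknowledged
>          message to each consumer at a time -->
>     <property name="brokerURL"
>        value="tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=1"/>
>   </bean>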
> 
> 
> Elliot Barlas wrote:
>> 
>> Hey Maarten, I am observing the same behavior in my AMQ application.  Was
>> your problem resolved?  I've tried a few different connection factory
>> approaches, to no avail :(
>> 
>> Thanks,
>> Elliot
>> 
>> 
>> 
>> Maarten_D wrote:
>>> 
>>> Hi Rob,
>>> Sorry, I'd killed that particular JVM before I read your request, and
>>> the error hasn't reoccurred since then.
>>> Something else I've been thinking about since reading this blog post:
>>> http://tmielke.blogspot.com/2009/12/using-spring-jms-template-for-sending.html
>>> We used the ActiveMQ PooledConnectionFactory for establishing all
>>> connections to the broker, for the Spring message listener containers
>>> as well as for the JMS templates.
>>> After diving into the code of the PooledConnectionFactory, this seems to
>>> have been a bad idea. While the pool isn't full, a request for a
>>> connection simply creates a new one and hands it out. Once the pool is
>>> full, it returns the first connection in its list (i.e. the first one it
>>> created), removes it from the top of the list and adds it to the bottom.
>>> This means that, if your listeners also get connections from this pool
>>> (and remember, listeners hold on to their connections), a whole bunch of
>>> JMS template calls will end up sending messages over the same connection
>>> that a listener is using.
>>> I'm not too sure about the details, but when you introduce producer flow
>>> control into this picture, I can imagine a kind of deadlock occurring
>>> where eventually all producers on all connections are throttled, leaving
>>> no one able to send any messages.
>>> Does this sound like a plausible scenario?
>>> 
>>> We've modified our config and given all listener containers a connection
>>> that's not in the pool, and are now running another test. I'll post the
>>> results.
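>>> 
>>> Concretely, the change amounts to something like the sketch below (bean
>>> ids, the broker address and the destination are placeholders, not our
>>> exact config): the listener containers get a plain
>>> ActiveMQConnectionFactory of their own, and only the JmsTemplate goes
>>> through the PooledConnectionFactory.
>>> 
>>>   <!-- plain, non-pooled factory: listener containers hold on to these
>>>        connections -->
>>>   <bean id="listenerConnectionFactory"
>>>         class="org.apache.activemq.ActiveMQConnectionFactory">
>>>     <property name="brokerURL" value="tcp://localhost:61616"/>
>>>   </bean>
>>> 
>>>   <!-- pooled factory: used only for sending via JmsTemplate, which
>>>        opens and closes a connection per call -->
>>>   <bean id="pooledConnectionFactory"
>>>         class="org.apache.activemq.pool.PooledConnectionFactory"
>>>         destroy-method="stop">
>>>     <property name="connectionFactory" ref="listenerConnectionFactory"/>
>>>   </bean>
>>> 
>>>   <bean id="listenerContainer"
>>>         class="org.springframework.jms.listener.DefaultMessageListenerContainer">
>>>     <property name="connectionFactory" ref="listenerConnectionFactory"/>
>>>     <property name="destinationName" value="example.queue"/>
>>>     <property name="messageListener" ref="exampleListener"/>
>>>   </bean>
>>> 
>>>   <bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
>>>     <property name="connectionFactory" ref="pooledConnectionFactory"/>
>>>   </bean>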
>>> 
>>> Regards,
>>> Maarten
>>> 
>>> PS. Of course, you might say we should've known about this beforehand, as
>>> the PCF javadoc says that it's not really meant for consumers. The
>>> alternative it offers is Jencks, although that project has been dead for
>>> a while, to the point where even the homepage (jencks.org) is now a spam
>>> site. So clearly that isn't a viable alternative.
>>> 
>>> 
>>> rajdavies wrote:
>>>> 
>>>> Can you take a thread dump whilst it's in this state, and send us the
>>>> output?
>>>> 
>>>> thanks,
>>>> 
>>>> Rob
>>>> On 21 Jan 2010, at 17:26, Maarten_D wrote:
>>>> 
>>>>>
>>>>> Oh, and I forgot to mention I also turned on async sends
>>>>> (jms.useAsyncSend=true)
>>>>>
>>>>> Maarten_D wrote:
>>>>>>
>>>>>> I've now changed my activemq.xml to the listing below, made the session
>>>>>> transacted and set the acknowledge mode to SESSION_TRANSACTED.
>>>>>>
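>>>>>> On the consumer side that boils down to roughly the following (a
>>>>>> minimal sketch assuming Spring's DefaultMessageListenerContainer, with
>>>>>> placeholder bean and queue names; sessionTransacted=true is what gives
>>>>>> the SESSION_TRANSACTED acknowledge mode):
>>>>>>
>>>>>>   <bean class="org.springframework.jms.listener.DefaultMessageListenerContainer">
>>>>>>     <property name="connectionFactory" ref="connectionFactory"/>
>>>>>>     <property name="destinationName" value="example.queue"/>
>>>>>>     <property name="messageListener" ref="exampleListener"/>
>>>>>>     <!-- messages are received in a local transaction; a rollback
>>>>>>          redelivers them instead of losing them -->
>>>>>>     <property name="sessionTransacted" value="true"/>
>>>>>>   </bean>
>>>>>>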
>>>>>> Things were going well for me for a while, with the system processing
>>>>>> 3.2 million messages without a hitch, and then everything stopped
>>>>>> because the first component in the chain got lots of these:
>>>>>>
>>>>>> javax.jms.InvalidClientIDException: Broker: broker - Client:
>>>>>> ID:rhost-59116-1263927611185-1:445 already connected from
>>>>>> /127.0.0.1:56560
>>>>>>
>>>>>> And for an hour now, since it stopped processing messages, the broker
>>>>>> has been eating up almost 100% of the CPU for some reason I can't
>>>>>> quite fathom (disk utilization is very low, and there is no message
>>>>>> traffic passing through the broker).
>>>>>>
>>>>>> <beans xmlns="http://www.springframework.org/schema/beans"
>>>>>>        xmlns:amq="http://activemq.apache.org/schema/core"
>>>>>>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>>>>>>        xsi:schemaLocation="http://www.springframework.org/schema/beans
>>>>>>          http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
>>>>>>          http://activemq.apache.org/schema/core
>>>>>>          http://activemq.apache.org/schema/core/activemq-core.xsd
>>>>>>          http://mortbay.com/schemas/jetty/1.0
>>>>>>          http://jetty.mortbay.org/jetty.xsd">
>>>>>>
>>>>>>  <bean
>>>>>>    class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
>>>>>>    <property name="location" value="file:/etc/broker.properties"/>
>>>>>>  </bean>
>>>>>>
>>>>>>  <broker id="broker" useJmx="true" brokerName="${broker.name}"
>>>>>> start="true" xmlns="http://activemq.apache.org/schema/core"
>>>>>> dataDirectory="${activemq.data}">
>>>>>>
>>>>>>    <destinationPolicy>
>>>>>>      <policyMap>
>>>>>>        <policyEntries>
>>>>>>          <policyEntry queue=">" memoryLimit="32mb"
>>>>>> strictOrderDispatch="true" producerFlowControl="false">
>>>>>>            <pendingQueuePolicy>
>>>>>>              <vmQueueCursor />
>>>>>>            </pendingQueuePolicy>
>>>>>>          </policyEntry>
>>>>>>          <policyEntry topic=">" memoryLimit="32mb"
>>>>>> producerFlowControl="true" />
>>>>>>        </policyEntries>
>>>>>>      </policyMap>
>>>>>>    </destinationPolicy>
>>>>>>
>>>>>>    <managementContext>
>>>>>>      <managementContext useMBeanServer="true"
>>>>>>                         jmxDomainName="org.apache.activemq"
>>>>>>                         createMBeanServer="true"
>>>>>>                         createConnector="false"
>>>>>>                         connectorPort="1100"
>>>>>>                         connectorPath="/jmxrmi"/>
>>>>>>    </managementContext>
>>>>>>
>>>>>>    <persistenceAdapter>
>>>>>>      <kahaDB directory="${activemq.data}/${broker.name}"
>>>>>>              journalMaxFileLength="32mb"
>>>>>>              enableJournalDiskSyncs="false"
>>>>>>              indexWriteBatchSize="1000"
>>>>>>              indexCacheSize="1000"/>
>>>>>>    </persistenceAdapter>
>>>>>>
>>>>>>    <systemUsage>
>>>>>>      <systemUsage>
>>>>>>        <memoryUsage>
>>>>>>          <memoryUsage limit="512mb" />
>>>>>>        </memoryUsage>
>>>>>>      </systemUsage>
>>>>>>    </systemUsage>
>>>>>>
>>>>>>    <transportConnectors>
>>>>>>      <transportConnector uri="nio://0.0.0.0:61616" />
>>>>>>    </transportConnectors>
>>>>>>  </broker>
>>>>>>
>>>>>>  <jetty xmlns="http://mortbay.com/schemas/jetty/1.0">
>>>>>>    <connectors>
>>>>>>      <nioConnector port="61617"/>
>>>>>>    </connectors>
>>>>>>    <handlers>
>>>>>>      <webAppContext contextPath="/admin"
>>>>>> resourceBase="${activemq.base}/webapps/admin" logUrlOnStart="true"/>
>>>>>>    </handlers>
>>>>>>  </jetty>
>>>>>> </beans>
>>>>>>
>>>>>
>>>>>
>>>> 
>>>> Rob Davies
>>>> http://twitter.com/rajdavies
>>>> I work here: http://fusesource.com
>>>> My Blog: http://rajdavies.blogspot.com/
>>>> I'm writing this: http://www.manning.com/snyder/
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>> 
>>> 
>> 
>> 
> 
> 


