activemq-issues mailing list archives

From "Klaus Pittig (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (AMQ-6115) No more browse/consume possible after #checkpoint run
Date Wed, 13 Jan 2016 15:49:39 GMT

     [ https://issues.apache.org/jira/browse/AMQ-6115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Klaus Pittig updated AMQ-6115:
------------------------------
    Description: 
We are currently facing a problem when using ActiveMQ with a large number of persistent queues
(250), each holding 1000 persistent TextMessages of 10 KB each.
Our scenario requires these messages to remain in the store for a long time (days) until
they are consumed (large amounts of data are staged for distribution to many consumers, which
may be offline for several days).

This issue is independent of the JVM, OS and persistence adapter (KahaDB, LevelDB), each with enough
free space and memory.
We tested this behaviour with ActiveMQ 5.5.1, 5.11.2 and 5.13.0.

After the persistence store is filled with these messages (we use a simple unit test that always
produces the same message) and the broker is restarted, we can browse/consume some queues _until_
the #checkpoint call runs after 30 seconds.
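
For reference, a minimal sketch of the kind of unit-test producer we use to fill the store; the broker URL, queue naming and payload generation here are illustrative, not the actual test code:
{code:java}
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FillStore {
    public static void main(String[] args) throws Exception {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = cf.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // ~10 KB text payload, always the same message
        String payload = new String(new char[10 * 1024]).replace('\0', 'x');

        // 250 queues, 1000 persistent TextMessages each
        for (int q = 0; q < 250; q++) {
            Queue queue = session.createQueue("TEST.QUEUE." + q);
            MessageProducer producer = session.createProducer(queue);
            producer.setDeliveryMode(DeliveryMode.PERSISTENT);
            for (int m = 0; m < 1000; m++) {
                producer.send(session.createTextMessage(payload));
            }
            producer.close();
        }
        connection.close();
    }
}
{code}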

This call causes the broker to use all available memory and never release it for other tasks
such as queue browse/consume. Internally, the MessageCursor seems to decide that there is
not enough memory and stops delivering queue content to browsers/consumers.

=> Is there a way to avoid or fix this behaviour?
The expectation is that we can consume/browse any queue under all circumstances.
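
To show how we observe this, a minimal QueueBrowser sketch (broker URL and queue name are again illustrative); in our setup the enumeration stays empty after the checkpoint run although the store still holds the messages:
{code:java}
import java.util.Enumeration;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.QueueBrowser;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class BrowseCheck {
    public static void main(String[] args) throws Exception {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = cf.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        QueueBrowser browser = session.createBrowser(session.createQueue("TEST.QUEUE.0"));
        int count = 0;
        for (Enumeration<?> e = browser.getEnumeration(); e.hasMoreElements(); e.nextElement()) {
            count++;
        }
        // Before the checkpoint run this prints 1000; afterwards the enumeration stays empty.
        System.out.println("browsed " + count + " messages");
        connection.close();
    }
}
{code}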

Besides the above-mentioned settings, we use the following broker configuration (btw: changing
the memoryLimit to a lower value like 1mb does not change the situation):
{code:xml}
        <destinationPolicy>
            <policyMap>
              <policyEntries>
                <policyEntry queue=">" producerFlowControl="false"
                             optimizedDispatch="true" memoryLimit="128mb">
                  <dispatchPolicy>
                    <strictOrderDispatchPolicy />
                  </dispatchPolicy>
                  <pendingQueuePolicy>
                    <storeCursor/>
                  </pendingQueuePolicy>
                </policyEntry>
              </policyEntries>
            </policyMap>
        </destinationPolicy>

        <systemUsage>
            <systemUsage sendFailIfNoSpace="true">
                <memoryUsage>
                    <memoryUsage limit="500 mb"/>
                </memoryUsage>
                <storeUsage>
                    <storeUsage limit="80000 mb"/>
                </storeUsage>
                <tempUsage>
                    <tempUsage limit="1000 mb"/>
                </tempUsage>
            </systemUsage>
        </systemUsage>
{code}

Setting the *cursorMemoryHighWaterMark* in the destinationPolicy to a higher value like
*150* or *600* (depending on the difference between memoryUsage and the available heap space)
relieves the situation somewhat as a workaround, but in my point of view this is not really an
option for production systems.
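
For illustration, the policyEntry from above with this workaround applied (the value 150 is only an example):
{code:xml}
                <policyEntry queue=">" producerFlowControl="false"
                             optimizedDispatch="true" memoryLimit="128mb"
                             cursorMemoryHighWaterMark="150">
                  <dispatchPolicy>
                    <strictOrderDispatchPolicy />
                  </dispatchPolicy>
                  <pendingQueuePolicy>
                    <storeCursor/>
                  </pendingQueuePolicy>
                </policyEntry>
{code}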

Attached is some information from Oracle Mission Control and JProfiler showing the ActiveMQTextMessage
instances that are never released from memory.

h4. 2016-01-13 - Follow-up on our solution described in the comments (setting useCache="false"
and expireMessagesPeriod="0"):
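
For reference, a sketch of the policyEntry with these two attributes applied (the other attributes are as in the configuration above):
{code:xml}
                <policyEntry queue=">" producerFlowControl="false"
                             optimizedDispatch="true" memoryLimit="128mb"
                             useCache="false" expireMessagesPeriod="0">
                  <dispatchPolicy>
                    <strictOrderDispatchPolicy />
                  </dispatchPolicy>
                  <pendingQueuePolicy>
                    <storeCursor/>
                  </pendingQueuePolicy>
                </policyEntry>
{code}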

In my point of view, it is worth discussing the single memoryLimit parameter that is used for
both the regular browse/consume threads and the checkpoint/cleanup threads.
There should always be enough space to browse/consume any queue, at least with prefetch 1, i.e.
one of the next pending messages.
Maybe - in this case - two well-balanced memoryLimit parameters with priority on consumption
rather than checkpoint/cleanup would allow better regulation. Or something along those lines.




> No more browse/consume possible after #checkpoint run
> -----------------------------------------------------
>
>                 Key: AMQ-6115
>                 URL: https://issues.apache.org/jira/browse/AMQ-6115
>             Project: ActiveMQ
>          Issue Type: Bug
>          Components: activemq-leveldb-store, Broker, KahaDB
>    Affects Versions: 5.5.1, 5.11.2, 5.13.0
>         Environment: OS=Linux,MacOS,Windows, Java=1.7,1.8, Xmx=1024m, SystemUsage Memory Limit 500 MB, Temp Limit 1 GB, Storage 80 GB
>            Reporter: Klaus Pittig
>         Attachments: Bildschirmfoto 2016-01-08 um 12.09.34.png, Bildschirmfoto 2016-01-08 um 13.29.08.png
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
