activemq-dev mailing list archives

From "Torsten Mielke (Assigned) (JIRA)" <j...@apache.org>
Subject [jira] [Assigned] (AMQ-3665) Velocity's IntroSpectionCache causes OutOfMemoryError on large AMQ stores when running activemq-admin journal-audit
Date Mon, 23 Jan 2012 13:34:42 GMT

     [ https://issues.apache.org/jira/browse/AMQ-3665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Torsten Mielke reassigned AMQ-3665:
-----------------------------------

    Assignee: Torsten Mielke
    
> Velocity's IntroSpectionCache causes OutOfMemoryError on large AMQ stores when running activemq-admin journal-audit
> ------------------------------------------------------------------------------------------------------------------
>
>                 Key: AMQ-3665
>                 URL: https://issues.apache.org/jira/browse/AMQ-3665
>             Project: ActiveMQ
>          Issue Type: Bug
>          Components: Broker
>    Affects Versions: 5.5.1
>         Environment: AMQ persistence store, activemq-admin journal-audit
>            Reporter: Torsten Mielke
>            Assignee: Torsten Mielke
>              Labels: OOM, OutOfMemoryError, activemq-admin, journal-audit, velocity
>         Attachments: AMQ-3665.patch
>
>
> activemq-admin journal-audit can be used to dump the content of the AMQ store to standard output. The format of the output is rendered using Velocity.
> For large AMQ stores (e.g. 3 GB), activemq-admin will run out of memory.
> This is because Velocity internally uses an introspection cache that fills up over time until heap memory is exhausted.
> There is some documentation on that cache in the Velocity [Developers Guide|http://velocity.apache.org/engine/devel/developer-guide.html]
in section "Other Context Issues":
> {quote}
> One of the features provided by the VelocityContext (or any Context derived from AbstractContext) is node-specific introspection caching. Generally, you as the developer don't need to worry about this when using the VelocityContext as your context. However, there is currently one known usage pattern where you must be aware of this feature.
> The VelocityContext will accumulate introspection information about the syntax nodes in a template as it visits those nodes. So, in the following situation:
> - You are iterating over the same template using the same VelocityContext object.
> - Template caching is off.
> - You request the Template from getTemplate() on each iteration.
> It is possible that your VelocityContext will appear to 'leak' memory (it is really just
gathering more introspection information.) What happens is that it accumulates template node
introspection information for each template it visits, and as template caching is off, it
appears to the VelocityContext that it is visiting a new template each time. Hence it gathers
more introspection information and grows. It is highly recommended that you do one or more
of the following:
> - Create a new VelocityContext for each excursion down through the template render process.
This will prevent the accumulation of introspection cache data. For the case where you want
to reuse the VelocityContext because it's populated with data or objects, you can simply wrap
the populated VelocityContext in another, and the 'outer' one will accumulate the introspection
information, which you will just discard. Ex. VelocityContext useThis = new VelocityContext(
populatedVC ); This works because the outer context will store the introspection cache data,
and get any requested data from the inner context (as it is empty.) Be careful though - if
your template places data into the context and it's expected that it will be used in the subsequent
iterations, you will need to do one of the other fixes, as any template #set() statements
will be stored in the outermost context. See the discussion in Context chaining for more information.
> - Turn on template caching. This will prevent the template from being re-parsed on each iteration, so the VelocityContext not only avoids adding to its introspection cache but can reuse the cached information, resulting in a performance improvement.
> - Reuse the Template object for the duration of the loop iterations. Then you won't be
forcing Velocity, if the cache is turned off, to reread and reparse the same template over
and over, so the VelocityContext won't gather new introspection information each time.
> {quote}
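The context-wrapping recommendation quoted above can be simulated in plain Java. The sketch below is not the Velocity API — `ChainedContext` is a hypothetical stand-in that mimics how a wrapper context stores new data locally while reads fall through to the inner, populated context, so per-iteration cache growth lands in a discardable object:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for Velocity's context chaining: the outer
// context accumulates per-iteration state (the introspection cache
// in Velocity), while reads fall through to the inner context.
class ChainedContext {
    private final Map<String, Object> local = new HashMap<>();
    private final ChainedContext inner; // null for the root context

    ChainedContext(ChainedContext inner) { this.inner = inner; }

    Object get(String key) {
        if (local.containsKey(key)) return local.get(key);
        return inner != null ? inner.get(key) : null;
    }

    void put(String key, Object value) { local.put(key, value); }

    int localSize() { return local.size(); }
}

public class ContextChainingDemo {
    public static void main(String[] args) {
        ChainedContext populated = new ChainedContext(null);
        populated.put("journalEntry", "some record");

        for (int i = 0; i < 3; i++) {
            // Fresh wrapper per iteration: anything written during the
            // render lands here and is discarded with the wrapper.
            ChainedContext useThis = new ChainedContext(populated);
            useThis.put("cachedNode" + i, new Object()); // simulated cache growth
            System.out.println(useThis.get("journalEntry")); // prints "some record"
        }
        // The long-lived inner context never grew:
        System.out.println(populated.localSize()); // prints 1
    }
}
```

Because each wrapper is unreachable after its iteration, the accumulated "cache" is garbage-collected instead of growing for the lifetime of the loop — the same effect the Velocity documentation describes for wrapping a populated VelocityContext.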
> Right now the Velocity introspection cache grows with every entry read from the journal until an OutOfMemoryError is raised.
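For illustration only, the first workaround quoted above could be applied to the journal-audit render loop roughly as follows. This is a hypothetical sketch, not the attached AMQ-3665.patch; `journalEntries` and `template` are stand-ins, not actual activemq-admin code:

```java
import java.io.StringWriter;
import org.apache.velocity.VelocityContext;
import org.apache.velocity.app.VelocityEngine;

// Hypothetical sketch: wrap the populated context once per entry so
// the introspection cache accumulates in the throwaway wrapper and
// becomes garbage after each render, instead of growing for the
// lifetime of the audit.
VelocityEngine engine = new VelocityEngine();
engine.init();

VelocityContext populated = new VelocityContext();
// ... put shared helpers/formatters into 'populated' once ...

for (Object entry : journalEntries) {              // 'journalEntries' is a stand-in
    VelocityContext useThis = new VelocityContext(populated); // fresh wrapper
    useThis.put("entry", entry);
    StringWriter out = new StringWriter();
    engine.evaluate(useThis, out, "journal-audit", template);  // 'template' is a stand-in
    System.out.print(out);
}   // the wrapper and its introspection cache become unreachable here
```

This keeps the long-lived populated context small: each wrapper's introspection cache is bounded by a single entry's template rather than by the size of the store.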

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
