uima-user mailing list archives

From Richard Eckart de Castilho <richard.eck...@gmail.com>
Subject Re: Running Uima in Tomcat (memory issue)
Date Wed, 17 Jul 2013 07:04:56 GMT
On 17.07.2013, at 05:11, swirl <swirlobt@yahoo.com> wrote:

> I am wrapping a Uima analysis engine in a Tomcat JSF webapp.
> This AE loads and parses a large model file (300Mb).
> I can call the AE and run it using SimplePipeline.runPipeline() via the 
> webapp UI.
> However, the large model takes up a large chunk of memory that won't go away 
> even after the AE has run to completion. 
> Does Uima do any clean up of in-memory object instances after the AE is 
> completed?

UIMA doesn't clean up anything. It leaves this job to the garbage collection
facility of the JVM. That said, I know of nothing that UIMA would do to
prevent garbage collection.

If your model is actually a serialized Java object, check that it doesn't 
write to static variables and in that way prevent itself from being 
garbage collected.

In a webapp context, to avoid long initialization times, I would
recommend creating one instance of the AnalysisEngine and keeping it
around. Use some queuing to make sure it is never used by more than
one request at a time; UIMA AEs are not really thread-safe.
No worries about garbage collection here, because the AE will
live as long as your application is running.
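The single-instance-plus-queuing idea can be sketched in plain Java. This is a hypothetical wrapper, not UIMA API: "SerializedEngine" and its Function-based engine are stand-ins for an org.apache.uima.analysis_engine.AnalysisEngine created once at startup, and the fair lock plays the role of the request queue:

```java
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Function;

// Hypothetical wrapper that serializes access to one expensive-to-create
// engine instance. Function<I, O> stands in for an AnalysisEngine whose
// process(CAS) call must never run concurrently.
class SerializedEngine<I, O> {
    private final Function<I, O> engine;              // created once, kept for the app's lifetime
    private final ReentrantLock lock = new ReentrantLock(true); // fair: requests line up in order

    SerializedEngine(Function<I, O> engine) {
        this.engine = engine;
    }

    O process(I input) {
        lock.lock();                                   // at most one request uses the engine at a time
        try {
            return engine.apply(input);
        } finally {
            lock.unlock();
        }
    }
}
```

With a real AE, the Function body would call ae.process(cas); the engine reference never goes out of scope, so its model is loaded exactly once and is never a garbage-collection concern.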

Alternatively, maintain a pool of AnalysisEngines that can grow as
the number of concurrent users grows. When the number of concurrent
users drops again, the pool should automatically shut down unused
AnalysisEngines after some idle time. In that case, you'd want
to worry about garbage collection again.
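A minimal grow-on-demand pool might look like the sketch below. "EnginePool" is hypothetical; in a real app the factory would call UIMAFramework.produceAnalysisEngine(...), and the idle-time eviction described above is simplified here to a cap on idle instances. Dropping the reference in release() is exactly what makes an unused engine (and its model) eligible for garbage collection:

```java
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Supplier;

// Hypothetical pool: E stands in for AnalysisEngine. Grows on demand;
// idle engines beyond maxIdle are dropped so the GC can reclaim them.
class EnginePool<E> {
    private final ConcurrentLinkedDeque<E> idle = new ConcurrentLinkedDeque<>();
    private final Supplier<E> factory;
    private final int maxIdle;
    private final AtomicInteger created = new AtomicInteger();

    EnginePool(Supplier<E> factory, int maxIdle) {
        this.factory = factory;
        this.maxIdle = maxIdle;
    }

    E borrow() {
        E e = idle.pollFirst();            // reuse an idle engine if one exists
        if (e == null) {
            created.incrementAndGet();
            e = factory.get();             // grow: create a new engine on demand
        }
        return e;
    }

    void release(E e) {
        if (idle.size() < maxIdle) {
            idle.addFirst(e);              // keep for reuse
        }
        // else: drop the reference so the engine (and its model) can be GC'd
    }

    int createdCount() { return created.get(); }
}
```

A production version would also track a last-used timestamp per idle engine and evict on a timer; libraries such as Apache Commons Pool provide that out of the box.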

It may be possible to encapsulate the large model in an external
resource and share it amongst your AnalysisEngine instances via a
shared ResourceManager. If you use third-party UIMA components,
you will quite likely have to extend them to use external
resources. The only components I know of that currently wrap models in
external resources are the OpenNLP UIMA components, and even there
I'm not sure whether the actual model-wrapping implementation
saves any memory.
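The effect a shared external resource is meant to achieve can be shown with a plain-Java analogy (the real mechanism is UIMA's external-resource binding resolved through a shared ResourceManager). "SharedModel" and "ModelBackedEngine" are hypothetical names: the point is that each engine holds a reference to one model instance instead of parsing its own 300 MB copy:

```java
import java.util.List;

// Plain-Java analogy of a UIMA external resource: the large model is loaded
// once and shared by reference, so N engines do not mean N copies in memory.
final class SharedModel {
    final List<String> entries;                   // stand-in for a large parsed model

    SharedModel(List<String> entries) {
        this.entries = entries;
    }
}

class ModelBackedEngine {
    private final SharedModel model;              // a reference, not a copy

    ModelBackedEngine(SharedModel model) {
        this.model = model;
    }

    SharedModel model() { return model; }         // exposed to show instances share one model

    boolean knows(String token) {
        return model.entries.contains(token);
    }
}
```

In UIMA terms, SharedModel corresponds to a SharedResourceObject bound to the engines, and constructing all engines with the same ResourceManager is what makes them resolve to the same loaded instance.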

However, bear in mind that the model itself may not be thread-safe.


-- Richard