uima-user mailing list archives

From Eddie Epstein <eaepst...@gmail.com>
Subject Re: UIMA-AS consuming all the RAM
Date Sat, 26 Mar 2011 14:52:49 GMT
The memory traces don't show anything changing between the
intermediate and final snapshots. What am I missing?

One thing that can cause memory problems with UIMA applications is
that CAS memory use grows to cover the largest document encountered.
Having many CASes in the various CAS pools compounds the problem,
so it is important to limit their number to what is needed to support
concurrent processing. For example, why do the clients and services
each have so many CASes?
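As an illustration (not from the original thread), here is a minimal sketch of
capping the client-side CAS pool via the UIMA-AS client API; the broker URL and
queue name below are placeholders:

  import java.util.HashMap;
  import java.util.Map;
  import org.apache.uima.aae.client.UimaAsynchronousEngine;
  import org.apache.uima.adapter.jms.client.BaseUIMAAsynchronousEngine_impl;

  public class SmallCasPoolClient {
    public static void main(String[] args) throws Exception {
      UimaAsynchronousEngine engine = new BaseUIMAAsynchronousEngine_impl();
      Map<String, Object> appCtx = new HashMap<String, Object>();
      appCtx.put(UimaAsynchronousEngine.ServerUri, "tcp://localhost:61616"); // placeholder broker
      appCtx.put(UimaAsynchronousEngine.ENDPOINT, "myServiceQueue");         // placeholder queue
      // Each CAS eventually holds the heap of the largest document it has
      // seen, so the pool size multiplies the worst-case client footprint.
      // Keep it at the number of CASes actually in flight concurrently.
      appCtx.put(UimaAsynchronousEngine.CasPoolSize, Integer.valueOf(2));
      engine.initialize(appCtx);
    }
  }

On the service side the CAS count similarly scales with the number of
analysis-engine instances configured in the deployment descriptor, so the
same reasoning applies there.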

The total number of CASes described is large relative to the 8 CPUs in
the system. It is useful to deploy a single client and service and see
where the memory and CPU are going. For example, how much CPU is going
into CAS serialization vs. useful processing? How much CPU is going
into Java garbage collection? Use jconsole to connect to the client
and service processes. UIMA-AS has MBeans that break down where CPU is
going.
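For the GC part of that question, a small standalone sketch using the standard
JDK management API (plain java.lang.management, not a UIMA-AS API) shows the
kind of numbers jconsole reports; run it inside, or attach jconsole to, the
client or service JVM:

  import java.lang.management.GarbageCollectorMXBean;
  import java.lang.management.ManagementFactory;

  public class GcTimeProbe {
    public static void main(String[] args) {
      // Each collector (e.g. young and old generation) reports cumulative
      // collection count and wall-clock time spent collecting, in ms.
      for (GarbageCollectorMXBean gc :
           ManagementFactory.getGarbageCollectorMXBeans()) {
        System.out.printf("%s: %d collections, %d ms total%n",
            gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
      }
    }
  }

jconsole attaches to a local process with just "jconsole <pid>", using the
PIDs visible in the top output quoted below.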

Eddie



On Fri, Mar 25, 2011 at 5:28 AM, Arun Tewatia <arun.tewatia@orkash.com> wrote:
> Hi all,
>
> Eddie Epstein <eaepstein@...> writes:
>> Is this a single instance of the aggregate? If so I'm confused why a
>> single instance would take 10GB, the same as 10 instances of the
>> aggregate.
>>
>> >
>
> When I run the same scenario on UIMA, I run 7 instances of the UIMA
> pipeline simultaneously on the same single server.
>
>
>> > Is this normal behavior of UIMA-AS, or am I missing something? I increased
>> > RAM to 24 GB too, but the same problem still occurs.
>>
>> Does it take much longer to run out of memory with 24GB? And when it
>> does, are all the service and client processes the same size as when
>> they started, with the broker taking all of the memory?
>> Eddie
>>
>
> Yes, it does take more time to run out of memory when I use 24 GB. And yes,
> all the service, client, and broker processes are almost the same size as
> when they started. The top command stats for the broker, client, and service
> are as follows:
>
> ......... Broker ........
>
>  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+
> 11575 root      20   0 2569m 389m  10m S    4  1.6   0:30.35
> 11575 root      20   0 2569m 389m  10m S    4  1.6   0:30.47
> 11575 root      20   0 2569m 389m  10m S    3  1.6   0:30.57
> .
> .
> 11575 root      20   0 2569m 424m  10m S    1  1.8   4:43.87
> 11575 root      20   0 2569m 424m  10m S    1  1.8   4:43.90
>
>
> ...... Client .........
>
>  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+
> 13324 root      20   0 1272m 443m  11m S   12  1.8   0:25.75
> 13324 root      20   0 1272m 448m  11m S   20  1.9   0:26.35
> 13324 root      20   0 1272m 466m  11m S   26  1.9   0:28.71
> 13324 root      20   0 1273m 652m  11m S   13  2.7   7:30.39
> 13324 root      20   0 1273m 696m  11m S   47  2.9  23:35.19
> .
> .
> .
> .
> 13324 root      20   0 1273m 683m  11m S   59  2.8  23:48.18
> 13324 root      20   0 1273m 676m  11m S   54  2.8  23:53.68
>
>
> ....... Service .........
>
> Before the client request, when only the services were deployed:
>
>  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+
> 10822 root      20   0 1313m 277m  11m S    0  1.1   0:15.93
> 10822 root      20   0 1313m 277m  11m S    0  1.1   0:15.94
> .
> .
> .
>
> After the client request.......
>
>  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+
> 10822 root      20   0 1315m 586m  11m S   57  2.4   0:21.81
> 10822 root      20   0 1315m 587m  11m S   43  2.4   0:23.10
> 10822 root      20   0 1440m 760m  14m S   65  3.1  20:51.97
> 10822 root      20   0 1440m 762m  14m S   82  3.2  21:01.38
> 10822 root      20   0 1408m 669m  11m S   62  2.8  25:56.98
> .
> .
> .
> 10822 root      20   0 1408m 731m  11m S    0  3.0  80:56.24
> 10822 root      20   0 1408m 731m  11m S    0  3.0  80:56.41
>
>
>
> Thanks !
> Arun Tewatia
>
>
