activemq-commits mailing list archives

From "Lionel Cons (Commented) (JIRA)" <>
Subject [jira] [Commented] (APLO-160) Apollo becoming unresponsive when stressed with 48k connections.
Date Fri, 17 Feb 2012 06:53:59 GMT


Lionel Cons commented on APLO-160:

Regarding the test I used, it's a simple single-threaded Perl script establishing N connections
and pushing STOMP frames to them as fast as possible. I ran it with N=1000 on ~80 machines,
concurrently. I will later try to use stomp-benchmark with an ad-hoc scenario instead.
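For concreteness, a minimal sketch of such a stress client, here in Python rather than the original Perl (the host, port, destination, payload size, and frame details are assumptions for illustration, not taken from the actual script):

```python
import socket

def stomp_frame(command, headers, body=""):
    """Serialize a STOMP frame: command line, headers, blank line, body, NUL."""
    head = "".join(f"{k}:{v}\n" for k, v in headers.items())
    return f"{command}\n{head}\n{body}\x00".encode()

def stress(host, port, n_connections, frames_per_connection):
    """Open N connections, then push SEND frames to all of them as fast as possible.

    Replies from the broker are deliberately never read; the point is to
    stress connection handling, not to exchange messages correctly.
    """
    socks = []
    for _ in range(n_connections):
        s = socket.create_connection((host, port))
        s.sendall(stomp_frame("CONNECT", {"accept-version": "1.1", "host": host}))
        socks.append(s)
    for _ in range(frames_per_connection):
        for s in socks:
            s.sendall(stomp_frame("SEND", {"destination": "/queue/stress"}, "x" * 64))
    for s in socks:
        s.close()
```

Running this with n_connections=1000 on each of ~80 machines approximates the 48k-connection load described in the issue title.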

The latest snapshot (33), with the default TCP settings and the same test, shows a
very different behavior: the broker is very slow and many connections time out. Of course,
because far fewer connections get through, there is no memory problem anymore.

OTOH, reducing the buffer sizes (I used 8k for each) greatly improved the situation. On this subject,
see APLO-163.
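For illustration only: an 8k-per-connection limit corresponds, on the client side, to the standard SO_SNDBUF/SO_RCVBUF socket options. This is a generic sketch of that idea; the broker-side setting actually under discussion is the one tracked in APLO-163. At 48k connections, the difference between default buffers (often 64k+ each) and 8k buffers is several gigabytes of kernel/heap memory.

```python
import socket

def make_small_buffer_socket(size=8192):
    """Create a TCP socket with reduced send/receive buffers (8k each).

    Note: the kernel may round the effective size up or down; on Linux,
    getsockopt typically reports double the requested value.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, size)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, size)
    return s
```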
> Apollo becoming unresponsive when stressed with 48k connections.
> ----------------------------------------------------------------
>                 Key: APLO-160
>                 URL:
>             Project: ActiveMQ Apollo
>          Issue Type: Bug
>         Environment: apollo-1.1-20120209.032648-24
>            Reporter: Lionel Cons
>            Assignee: Hiram Chirino
>         Attachments: apollo.dump
> While running a stress test against apollo-1.1-20120209.032648-24 (many concurrent TCP connections), the broker became unresponsive.
> It logged several times: java.lang.OutOfMemoryError: GC overhead limit exceeded
> It also logged other warnings, probably related:
> 2012-02-14 14:14:49,273 | WARN  | handle failed | | Apollo Task
> 2012-02-14 14:18:39,073 | WARN  | Problem scavenging sessions | org.eclipse.jetty.server.session | HashSessionScavenger-0
> It could not be stopped either, I had to kill -9 it.
> What can be done to avoid these problems?
> FWIW, java has been started with -server -Xmx8192m -Xms4096m

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.

