activemq-users mailing list archives

From Clebert Suconic <clebert.suco...@gmail.com>
Subject Re: Apache Artemis - Stress-test time
Date Wed, 22 Mar 2017 18:55:01 GMT
It's MQTT. I would recommend using BLOCK for these addresses,
though, unless you really need paging.

Or a bigger maxSize and more memory.
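
For example, a minimal broker.xml sketch of what that could look like (the match pattern and
size here are only illustrative, not taken from your setup):

    <address-settings>
       <!-- "#" matches every address; a narrower match could target just the MQTT ones -->
       <address-setting match="#">
          <!-- cap the in-memory size per address -->
          <max-size-bytes>10485760</max-size-bytes>
          <!-- block producers instead of paging to disk once the cap is reached -->
          <address-full-policy>BLOCK</address-full-policy>
       </address-setting>
    </address-settings>

If you're on a recent version, paging with max-size-bytes: -1 (as in your log) is most likely
being triggered by global-max-size (by default roughly half of the JVM heap), so the second
option would mean raising that value together with -Xmx.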

On Wed, Mar 22, 2017 at 1:39 PM, Francesco PADOVANI
<Francesco.PADOVANI@bticino.it> wrote:
> Hi all,
>
> we're currently performing some stress-test sessions against our Apache Artemis instance.
> To do this, we have developed a Java program which uses the Paho library. We are simulating
> (just to start) 1000 subscriber clients, each of which connects with QoS 2 and subscribes to
> 50 specific topics dedicated to it, plus another 1000 publisher clients, each of which connects
> with QoS 2 and publishes a retained message every second on one specific topic among the 50
> of its own dedicated subscriber.
>
>
> Hence, for every publisher there's a subscriber: each subscriber subscribes to 50 topics.
> Each publisher publishes on a single specific topic. The total number of topics (addresses)
> involved is 1000 x 50 = 50,000.
>
>
> Our Artemis instance runs on a CentOS 7 server (64-bit) with 8 GB of RAM (Xms and Xmx both
> set to 4 GB of heap) and 4 vCPUs. The max number of processes for the Artemis process is 4096
> and the max number of open files is 64000.
>
>
> We are able to start 1000 publishers and 1000 subscribers. CPU is fine. Memory usage grows
> over time. At a certain point (after about 5 minutes) the heap fills up and Artemis starts to
> page... and this is its death! When the broker starts to page, it becomes unusable. Almost
> all connected clients get disconnected, the RAM is never freed again, and the file system
> partition used for paging (no SSD, unfortunately) keeps growing until it reaches 100%.
>
>
> As I said, Artemis works well until it starts to page, which is exactly the moment I see the
> following lines in the log:
>
> ...
>
> AMQ222038: Starting paging on address '.cro.plantid.1075528.gwid.50437.device_state';
> size is currently: 1,521 bytes; max-size-bytes: -1
>
> AMQ222038: Starting paging on address '.cro.plantid.1075520.gwid.50451.device_state';
> size is currently: 1,521 bytes; max-size-bytes: -1
>
> ...
>
> AMQ222038: Starting paging on address '$sys.mqtt.queue.qos2.50687s'; size is currently:
> 107,379 bytes; max-size-bytes: -1
>
>
> It seems that the addresses related to the topics (like "cro.plantid.1075520.gwid.50451.device_state")
> always stay the same size (I think because there is always only one publisher and one subscriber,
> and only retained messages are sent on those topics, so the last one replaces the previous one).
>
> By contrast, the "system" addresses (like "$sys.mqtt.queue.qos2.50687s") grow constantly.
>
>
> Please, could someone explain to me how to manage the system addresses and how to keep them
> from growing so much? Maybe I'm wrong somewhere in the configuration? What would be the best
> configuration for my specific case (the one we are testing, as described above)?
>
>
> Thanks so much in advance, guys.
>
>
> Francesco
>
>
> ________________________________
>
> This e-mail, and any document attached hereby, may contain confidential and/or privileged
> information. If you are not the intended recipient (or have received this e-mail in error)
> please notify the sender immediately and destroy this e-mail. Any unauthorized, direct or
> indirect, copying, disclosure, distribution or other use of the material or parts thereof
> is strictly forbidden.



-- 
Clebert Suconic
