activemq-users mailing list archives

From Ilkka Virolainen <Ilkka.Virolai...@bitwise.fi>
Subject RE: Artemis 2.4.0 - Issues with memory leaks and JMS message redistribution
Date Thu, 08 Mar 2018 11:05:49 GMT
An update: the errors were caused by a peculiarity in our environment. In any case, given the nature of the changes between builds, it is fairly obvious this issue couldn't have been caused by the related commits. So far I've been unable to replicate the original issue (the cluster connection dying to a ConcurrentModificationException), so I remain optimistic.

Thank you!

Best regards,
- Ilkka

-----Original Message-----
From: Ilkka Virolainen [mailto:Ilkka.Virolainen@bitwise.fi] 
Sent: 8 March 2018 10:29
To: users@activemq.apache.org
Subject: RE: Artemis 2.4.0 - Issues with memory leaks and JMS message redistribution

Trying out master results in weird issues:

Broker A logs a very large amount of this:

10:06:37,292 ERROR [org.apache.activemq.artemis.core.client] AMQ214031: Failed to decode buffer, disconnect immediately.: java.lang.IllegalStateException: java.lang.IllegalArgumentException: AMQ119032: Invalid type: 1
        at org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl.bufferReceived(RemotingConnectionImpl.java:378) [artemis-core-client-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
        at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl$DelegatingBufferHandler.bufferReceived(ClientSessionFactoryImpl.java:1144) [artemis-core-client-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
        at org.apache.activemq.artemis.core.remoting.impl.netty.ActiveMQChannelHandler.channelRead(ActiveMQChannelHandler.java:68) [artemis-core-client-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1414) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:945) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:806) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:404) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:304) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118) [artemis-commons-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
Caused by: java.lang.IllegalArgumentException: AMQ119032: Invalid type: 1
        at org.apache.activemq.artemis.core.protocol.core.impl.PacketDecoder.decode(PacketDecoder.java:455) [artemis-core-client-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
        at org.apache.activemq.artemis.core.protocol.ClientPacketDecoder.decode(ClientPacketDecoder.java:67) [artemis-core-client-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
        at org.apache.activemq.artemis.core.protocol.ServerPacketDecoder.slowPathDecode(ServerPacketDecoder.java:253) [artemis-server-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
        at org.apache.activemq.artemis.core.protocol.ServerPacketDecoder.decode(ServerPacketDecoder.java:133) [artemis-server-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
        at org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl.bufferReceived(RemotingConnectionImpl.java:365) [artemis-core-client-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
        ... 19 more


Broker B logs this once (note: no other process listens on the addresses defined in its acceptors):

10:15:17,703 ERROR [org.apache.activemq.artemis.core.server] AMQ224000: Failure in initialisation: io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address already in use
        at io.netty.channel.unix.Errors.newIOException(Errors.java:122) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.channel.unix.Socket.bind(Socket.java:287) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.channel.epoll.AbstractEpollChannel.doBind(AbstractEpollChannel.java:687) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.channel.epoll.EpollServerSocketChannel.doBind(EpollServerSocketChannel.java:70) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:558) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1338) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:501) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:486) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:999) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:254) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:366) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:309) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:886) [netty-all-4.1.22.Final.jar:4.1.22.Final]
        at org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118) [artemis-commons-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]


All clients connecting to the brokers log a significant amount of this:

08:21:01.992 [Thread-2 (ActiveMQ-client-netty-threads-1829548907)] ERROR o.a.activemq.artemis.core.client - AMQ214013: Failed to decode packet
java.lang.IllegalArgumentException: AMQ119032: Invalid type: 1
        at org.apache.activemq.artemis.core.protocol.core.impl.PacketDecoder.decode(PacketDecoder.java:424)
        at org.apache.activemq.artemis.core.protocol.ClientPacketDecoder.decode(ClientPacketDecoder.java:60)
        at org.apache.activemq.artemis.core.protocol.ClientPacketDecoder.decode(ClientPacketDecoder.java:39)
        at org.apache.activemq.artemis.core.protocol.core.impl.RemotingConnectionImpl.bufferReceived(RemotingConnectionImpl.java:349)
        at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl$DelegatingBufferHandler.bufferReceived(ClientSessionFactoryImpl.java:1143)
        at org.apache.activemq.artemis.core.remoting.impl.netty.ActiveMQChannelHandler.channelRead(ActiveMQChannelHandler.java:68)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:372)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:358)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:350)
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:372)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:358)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:350)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:372)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:358)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:129)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:610)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:551)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:465)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:437)
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:873)
        at java.lang.Thread.run(Thread.java:748)


In addition, netstat on the broker B server lists a massive number of TCP connections from local port 61616 to the broker A host, with seemingly random remote ports, in TIME_WAIT status. It seems like something fundamental is broken, causing all TCP packets to fail somehow?
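For reference, a check along these lines can be scripted; the sample netstat lines below are illustrative stand-ins, not output from the actual broker hosts:

```shell
# Count TIME_WAIT connections whose local port is 61616, as observed on
# broker B. The sample stands in for real `netstat -ant` output; on a live
# host you would pipe netstat directly into the same awk filter.
sample='tcp        0      0 10.0.0.2:61616   10.0.0.1:51234   TIME_WAIT
tcp        0      0 10.0.0.2:61616   10.0.0.1:51235   TIME_WAIT
tcp        0      0 10.0.0.2:61616   10.0.0.1:51300   ESTABLISHED'
# Live equivalent: netstat -ant | awk '$4 ~ /:61616$/ && $6 == "TIME_WAIT"' | wc -l
printf '%s\n' "$sample" | awk '$4 ~ /:61616$/ && $6 == "TIME_WAIT"' | wc -l
```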

Best regards,
- Ilkka



-----Original Message-----
From: Clebert Suconic [mailto:clebert.suconic@gmail.com]
Sent: 8 March 2018 2:08
To: users@activemq.apache.org
Subject: Re: Artemis 2.4.0 - Issues with memory leaks and JMS message redistribution

Can you try master?

On Wed, Mar 7, 2018 at 2:11 AM, Ilkka Virolainen <Ilkka.Virolainen@bitwise.fi> wrote:
> My setup is the same as described in issue #1 of the first message in this thread: NMS clients listening to JMS topics for messages sent by Artemis 1.5.4 clients, both connecting randomly to either of the two brokers in the cluster. Load balancing works at first but eventually this error occurs and the cluster consumer seems to be removed.
>
> I will try to produce a test case reproducing this.
>
> Best regards,
> - Ilkka
>
> -----Original Message-----
> From: Clebert Suconic [mailto:clebert.suconic@gmail.com]
> Sent: 7 March 2018 4:23
> To: users@activemq.apache.org
> Subject: Re: Artemis 2.4.0 - Issues with memory leaks and JMS message 
> redistribution
>
> How do you run into this ?
>
> Are you running amqp messages?
>
> I was going to release 2.5 tomorrow but I wanted to find out first.
>
>
> I am in USA and about to sleep.  If you are in APAC please include as much info as you can for me to replicate this so I can make progress tomorrow.
>
>
> Thanks.
>
> On Tue, Mar 6, 2018 at 8:40 AM Ilkka Virolainen 
> <Ilkka.Virolainen@bitwise.fi>
> wrote:
>
>> While the memory leak is now fixed in master (thanks Justin!), 
>> unfortunately more issues regarding the load balancing have occurred.
>> The messages are redistributed at first but after a few minutes the 
>> broker
>> (2.5.0 current snapshot) logs the following error and load balancing 
>> of topic messages stop. I found an issue that lists similar errors 
>> but it is marked as closed:
>>
>>
>> https://issues.apache.org/jira/browse/ARTEMIS-1345?jql=project%20%3D%20ARTEMIS%20AND%20text%20~%20NoSuchElementException
>>
>> Does this error seem familiar?
>>
>> 15:06:08,000 WARN  [org.apache.activemq.artemis.core.server] AMQ222151:
>> removing consumer which did not handle a message,
>> consumer=ClusterConnectionBridge@6a50dbc1
>> [name=$.artemis.internal.sf.my-cluster.73a86a21-2066-11e8-a8bc-0021f6
>> e
>> 7cd2d,
>> queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.73a86a21-2066-1
>> 1
>> e8-a8bc-0021f6e7cd2d,
>> postOffice=PostOfficeImpl
>> [server=ActiveMQServerImpl::serverUUID=ae2a7067-2066-11e8-ba18-005056
>> b
>> c41e7],
>> temp=false]@46ca2a06 targetConnector=ServerLocatorImpl 
>> (identity=(Cluster-connection-bridge::ClusterConnectionBridge@6a50dbc
>> 1
>> [name=$.artemis.internal.sf.my-cluster.73a86a21-2066-11e8-a8bc-0021f6
>> e
>> 7cd2d,
>> queue=QueueImpl[name=$.artemis.internal.sf.my-cluster.73a86a21-2066-1
>> 1
>> e8-a8bc-0021f6e7cd2d,
>> postOffice=PostOfficeImpl
>> [server=ActiveMQServerImpl::serverUUID=ae2a7067-2066-11e8-ba18-005056
>> b
>> c41e7],
>> temp=false]@46ca2a06 targetConnector=ServerLocatorImpl 
>> [initialConnectors=[TransportConfiguration(name=netty-connector,
>> factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyCon
>> n
>> ectorFactory)
>> ?port=61616&host=broker-a-host&activemq-passwordcodec=****],
>> discoveryGroupConfiguration=null]]::ClusterConnectionImpl@2143139988[
>> n odeUUID=ae2a7067-2066-11e8-ba18-005056bc41e7,
>> connector=TransportConfiguration(name=netty-connector,
>> factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyCon
>> n
>> ectorFactory)
>> ?port=61616&host=broker-b-host&activemq-passwordcodec=****, address=, 
>> server=ActiveMQServerImpl::serverUUID=ae2a7067-2066-11e8-ba18-005056b
>> c
>> 41e7]))
>> [initialConnectors=[TransportConfiguration(name=netty-connector,
>> factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyCon
>> n
>> ectorFactory)
>> ?port=61616&host=broker-a-host&activemq-passwordcodec=****],
>> discoveryGroupConfiguration=null]],
>> message=Reference[19136928]:NON-RELIABLE:CoreMessage[messageID=191369
>> 2 8,durable=false,userID=null,priority=0,
>> timestamp=0,expiration=0, durable=false, 
>> address=ActiveMQ.Advisory.TempQueue,size=1089,properties=TypedPropert
>> i
>> es[__HDR_BROKER_IN_TIME=1520341567999,_AMQ_ROUTING_TYPE=0,__HDR_GROUP
>> _
>> SEQUENCE=0,__HDR_COMMAND_ID=0,__HDR_DATASTRUCTURE=[0000
>> 0062 0800 0000 0000 0178 0100 2449 443A 616C 3232 2D35 3833 3735 2D36  ...
>> 3233 2D38 3031 372D 6330 3564 3235 3339 3365 3364 0100 0000 0000 0000 
>> 0000),_AMQ_DUPL_ID=ID:livisovt49l-28730-1520248604578-1:2:0:0:1212,__
>> H
>> DR_MESSAGE_ID=[0000 004D 6E00 017B 0100 2649 443A 6C69 7669 736F 7674
>> 3439 6C2D 3238 3733  ...
>> 00 0000 0000 0000 0000 0000 0000 0000 0000 0000 0004 BC00 0000 0000
>> 0000
>> 00),__HDR_DROPPABLE=false,__HDR_ARRIVAL=0,__HDR_PRODUCER_ID=[0000
>> 003A
>> 7B01
>> 0026 4944 3A6C 6976 6973 6F76 7434 396C 2D32 3837 3330 2D31  ...  
>> 3032
>> 3438
>> 3630 3435 3738 2D31 3A32 0000 0000 0000 0000 0000 0000 0000 
>> 0000),JMSType=Advisory,_AMQ_ROUTE_TO$.artemis.internal.sf.my-cluster.
>> 7
>> 3a86a21-2066-11e8-a8bc-0021f6e7cd2d=[0000
>> 0000 009B AA9C 0000 0000 009B
>> AA9E),bytesAsLongs(10201756,10201758]]]@709863295:
>> java.util.ConcurrentModificationException
>>         at java.util.HashMap$HashIterator.nextNode(HashMap.java:1442)
>> [rt.jar:1.8.0_152]
>>         at java.util.HashMap$EntryIterator.next(HashMap.java:1476)
>> [rt.jar:1.8.0_152]
>>         at java.util.HashMap$EntryIterator.next(HashMap.java:1474)
>> [rt.jar:1.8.0_152]
>>         at java.util.HashMap.putMapEntries(HashMap.java:512)
>> [rt.jar:1.8.0_152]
>>         at java.util.HashMap.<init>(HashMap.java:490) [rt.jar:1.8.0_152]
>>         at
>> org.apache.activemq.artemis.utils.collections.TypedProperties.<init>(
>> T
>> ypedProperties.java:83)
>> [artemis-commons-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
>>         at
>> org.apache.activemq.artemis.core.message.impl.CoreMessage.<init>(Core
>> M
>> essage.java:347)
>> [artemis-core-client-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
>>         at
>> org.apache.activemq.artemis.core.message.impl.CoreMessage.<init>(Core
>> M
>> essage.java:321)
>> [artemis-core-client-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
>>         at
>> org.apache.activemq.artemis.core.message.impl.CoreMessage.copy(CoreMe
>> s
>> sage.java:374) [artemis-core-client-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
>>         at
>> org.apache.activemq.artemis.core.server.cluster.impl.ClusterConnectio
>> n
>> Bridge.beforeForward(ClusterConnectionBridge.java:168)
>> [artemis-server-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
>>         at
>> org.apache.activemq.artemis.core.server.cluster.impl.BridgeImpl.handl
>> e
>> (BridgeImpl.java:581)
>> [artemis-server-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
>>         at
>> org.apache.activemq.artemis.core.server.impl.QueueImpl.handle(QueueIm
>> p
>> l.java:2939) [artemis-server-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
>>         at
>> org.apache.activemq.artemis.core.server.impl.QueueImpl.deliver(QueueI
>> m
>> pl.java:2309) [artemis-server-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
>>         at
>> org.apache.activemq.artemis.core.server.impl.QueueImpl.access$2000(Qu
>> e
>> ueImpl.java:105) [artemis-server-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
>>         at
>> org.apache.activemq.artemis.core.server.impl.QueueImpl$DeliverRunner.
>> r
>> un(QueueImpl.java:3165)
>> [artemis-server-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
>>         at
>> org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(Order
>> e
>> dExecutor.java:42) [artemis-commons-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
>>         at
>> org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(Order
>> e
>> dExecutor.java:31) [artemis-commons-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
>>         at
>> org.apache.activemq.artemis.utils.actors.ProcessorBase.executePending
>> T
>> asks(ProcessorBase.java:66)
>> [artemis-commons-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
>>         at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.
>> j
>> ava:1149)
>> [rt.jar:1.8.0_152]
>>         at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.
>> java:624)
>> [rt.jar:1.8.0_152]
>>         at
>> org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveM
>> Q
>> ThreadFactory.java:118)
>> [artemis-commons-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
>>
>> 15:06:08,002 ERROR [org.apache.activemq.artemis.core.server] AMQ224041:
>> Failed to deliver: java.util.NoSuchElementException
>>         at
>> org.apache.activemq.artemis.utils.collections.PriorityLinkedListImpl$
>> P
>> riorityLinkedListIterator.repeat(PriorityLinkedListImpl.java:161)
>> [artemis-commons-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
>>         at
>> org.apache.activemq.artemis.core.server.impl.QueueImpl.deliver(QueueI
>> m
>> pl.java:2327) [artemis-server-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
>>         at
>> org.apache.activemq.artemis.core.server.impl.QueueImpl.access$2000(Qu
>> e
>> ueImpl.java:105) [artemis-server-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
>>         at
>> org.apache.activemq.artemis.core.server.impl.QueueImpl$DeliverRunner.
>> r
>> un(QueueImpl.java:3165)
>> [artemis-server-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
>>         at
>> org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(Order
>> e
>> dExecutor.java:42) [artemis-commons-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
>>         at
>> org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(Order
>> e
>> dExecutor.java:31) [artemis-commons-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
>>         at
>> org.apache.activemq.artemis.utils.actors.ProcessorBase.executePending
>> T
>> asks(ProcessorBase.java:66)
>> [artemis-commons-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
>>         at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.
>> j
>> ava:1149)
>> [rt.jar:1.8.0_152]
>>         at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.
>> java:624)
>> [rt.jar:1.8.0_152]
>>         at
>> org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveM
>> Q
>> ThreadFactory.java:118)
>> [artemis-commons-2.5.0-SNAPSHOT.jar:2.5.0-SNAPSHOT]
>>
>>
>>
>>
>>
>> -----Original Message-----
>> From: Ilkka Virolainen [mailto:Ilkka.Virolainen@bitwise.fi]
>> Sent: 1 March 2018 16:59
>> To: users@activemq.apache.org
>> Subject: RE: Artemis 2.4.0 - Issues with memory leaks and JMS message 
>> redistribution
>>
>> An update on this: I have replicated the memory and expiration issues 
>> on current 2.5.0-SNAPSHOT with included client libraries and a one 
>> node broker by modifying an existing artemis example. As messages are 
>> routed to DLQ, paged and expired, memory consumption keeps increasing 
>> and eventually leads to heap space exhaustion rendering the broker unable to route messages.
>> What should happen is that memory consumption stays reasonable even
>> without expiration, thanks to paging to disk, and doubly so with
>> expiration, because expired messages shouldn't consume any resources.
>>
>> I'm not certain whether the two issues (erroneous statistics on
>> expiration and the memory leak) are connected, but they both appear at
>> the same time, which raises suspicion. A possible cause could be that filtered 
>> message expiration behaves differently than some other means of
>> expiration: it uses a private expiration method that takes a 
>> transaction as a parameter. Unlike the nontransacted expiration 
>> method, it checks for empty bindings separately but doesn't seem to 
>> decrement counters appropriately in this case. Even though I have set 
>> a null expiry-address (<expiry-address />) it is seen as nonnull in 
>> expiration. Then as the expiry address is not null but bindings are 
>> not found, the warning about dropping the message is logged. However, 
>> it seems that the message is never acknowledged and the deliveringCount is never decreased so delivery metrics end up being wrong.
>>
>> Shouldn't there be an acknowledgment of the message reference 
>> following the logging when the following condition is matched?
>>
>> https://github.com/apache/activemq-artemis/blob/master/artemis-server/src/main/java/org/apache/activemq/artemis/core/server/impl/QueueImpl.java#L2735
>>
>> Also, why is the acknowledgment reason here not expiry but normal? 
>> One would imagine it should be acknowledge(tx, ref,
>> AckReason.EXPIRED) instead of the default overload so that the 
>> appropriate counters end up being
>> incremented:
>>
>> https://github.com/apache/activemq-artemis/blob/master/artemis-server/src/main/java/org/apache/activemq/artemis/core/server/impl/QueueImpl.java#L2747
>>
>> Best regards,
>> - Ilkka
>>
>> -----Original Message-----
>> From: Ilkka Virolainen [mailto:Ilkka.Virolainen@bitwise.fi]
>> Sent: 27 February 2018 15:20
>> To: users@activemq.apache.org
>> Subject: RE: Artemis 2.4.0 - Issues with memory leaks and JMS message 
>> redistribution
>>
>> Hello,
>>
>> - I don't have consumers on the DLQ and neither are any listed in its 
>> JMX attributes
>> - The messages are being sent to the DLQ by the broker after a 
>> delivery failure on another queue. The delivery failure is expected 
>> and caused by a transactional rollback on the consumer.
>> - I am setting the expiry delay on the broker's DLQ address-settings 
>> (not in message attributes). I'm setting an empty expiry-address in 
>> the same place.
>> - I have a set of broker settings and a small springboot application 
>> with which I was able to replicate the issue. Would you like me to 
>> provide it for you somehow?
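For context, the relevant address-settings look roughly like this; the DLQ match pattern is an assumption, not the poster's actual configuration:

```xml
<!-- Sketch only: a one-hour expiry-delay on the DLQ address, with an
     empty expiry-address so expired messages are dropped rather than routed -->
<address-settings>
   <address-setting match="DLQ">
      <expiry-address/>
      <expiry-delay>3600000</expiry-delay>
   </address-setting>
</address-settings>
```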
>>
>> It seems like there's some kind of hiccup in message expiration.
>> When the messages routed to the DLQ start expiring, the broker logs:
>>
>> AMQ222146: Message has expired. No bindings for Expiry Address  so 
>> dropping it
>>
>> but when reviewing the DLQ statistics via JMX, the ExpiredMessages
>> counter is not incremented while the DeliveringCount is. As messages
>> keep expiring, the DeliveringCount keeps increasing. This feels a lot
>> like the issue I've been having. Could it be that this process leaks
>> memory/resources, or is it just that the expiration statistics always
>> assume that expiration results in redelivery, thereby causing erroneous numbers to be reported?
>>
>> Best regards,
>> - Ilkka
>>
>>
>> -----Original Message-----
>> From: Justin Bertram [mailto:jbertram@apache.org]
>> Sent: 23 February 2018 16:51
>> To: users@activemq.apache.org
>> Subject: Re: Artemis 2.4.0 - Issues with memory leaks and JMS message 
>> redistribution
>>
>> Couple of questions:
>>
>>  - Do you have any consumers on the DLQ?
>>  - Are messages being sent to the DLQ by the broker automatically (e.g.
>> based on delivery attempt failures) or is that being done by your 
>> application?
>>  - How are you setting the expiry delay?
>>  - Do you have a reproducible test-case?
>>
>>
>> Justin
>>
>> On Fri, Feb 23, 2018 at 4:38 AM, Ilkka Virolainen < 
>> Ilkka.Virolainen@bitwise.fi> wrote:
>>
>> > I'm still facing an issue with somewhat confusing behavior
>> > regarding message expiration in the DLQ, maybe related to the
>> > memory issues I've been having. My aim is to have messages routed
>> > to the DLQ expire and be dropped in one hour. To achieve this, I've
>> > set an empty expiry-address and the appropriate expiry-delay. The
>> > problem is, most of the messages routed to the DLQ end up in an
>> > in-delivery state: they are not expiring and I cannot remove them
>> > via JMX. The MessageCount in the DLQ is slightly higher than the
>> > DeliveringCount, and attempting to remove all messages only removes
>> > a number equal to the difference between the two, approximately a
>> > few thousand messages, while the MessageCount is in the tens of
>> > thousands and
>> increasing as message delivery failures occur.
>> >
>> > What could be the reason for this behavior and how could it be avoided?
>> >
>> > -----Original Message-----
>> > From: Ilkka Virolainen [mailto:Ilkka.Virolainen@bitwise.fi]
>> > Sent: 22 February 2018 13:38
>> > To: users@activemq.apache.org
>> > Subject: RE: Artemis 2.4.0 - Issues with memory leaks and JMS 
>> > message redistribution
>> >
>> > To answer my own question, in case anyone else is wondering about
>> > a similar issue: it turns out the change in addressing is described
>> > in ticket [1], and adding the multicastPrefix and anycastPrefix
>> > described in the ticket to my broker acceptors seems to have fixed my problem.
>> > If the issue regarding memory leaks persists I will try to provide 
>> > a
>> reproducible test case.
>> >
>> > Thank you for your help, Justin.
>> >
>> > Best regards,
>> > - Ilkka
>> >
>> > [1] https://issues.apache.org/jira/browse/ARTEMIS-1644
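For anyone hitting the same thing, an acceptor carrying the prefixes from the ticket looks roughly like this; host, port, and acceptor name are illustrative, not taken from the actual broker.xml:

```xml
<!-- Sketch: anycastPrefix/multicastPrefix per ARTEMIS-1644; values assumed -->
<acceptors>
   <acceptor name="netty-acceptor">tcp://0.0.0.0:61616?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.</acceptor>
</acceptors>
```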
>> >
>> >
>> > -----Original Message-----
>> > From: Ilkka Virolainen [mailto:Ilkka.Virolainen@bitwise.fi]
>> > Sent: 22 February 2018 12:33
>> > To: users@activemq.apache.org
>> > Subject: RE: Artemis 2.4.0 - Issues with memory leaks and JMS 
>> > message redistribution
>> >
>> > Having removed the address configuration and switched from 2.4.0
>> > to yesterday's snapshot of 2.5.0, it seems the redistribution of
>> > messages is now working, but there also seems to 
>> > have been a change in addressing between the versions causing 
>> > another problem related to jms.queue / jms.topic prefixing. While 
>> > the NMS clients listen and artemis jms clients send to the same 
>> > topics as described in the previous message, Artemis 2.5.0 prefixes 
>> > the addresses with jms.topic. While the messages are being sent to e.g.
>> > A.B.f64dd592-a8fb-442e-826d-927834d566f4.C.D they are only received 
>> > if I explicitly prefix the listening address with jms.topic, for 
>> > example topic://jms.topic.A.B.*.C.D. Can this somehow be avoided in 
>> > the broker
>> configuration?
>> >
>> > Best regards
>> >
>> > -----Original Message-----
>> > From: Justin Bertram [mailto:jbertram@apache.org]
>> > Sent: 21 February 2018 15:19
>> > To: users@activemq.apache.org
>> > Subject: Re: Artemis 2.4.0 - Issues with memory leaks and JMS 
>> > message redistribution
>> >
>> > Your first issue is probably a misconfiguration.  Your 
>> > cluster-connection is using an "address" value of '*' which I 
>> > assume is supposed to mean "all addresses," but the "address"
>> > element doesn't
>> support wildcards like this.
>> > Just leave it empty to match all addresses.  See the documentation 
>> > [1] for more details.
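A minimal sketch of what that advice amounts to; the connector names are assumptions, not the poster's actual configuration:

```xml
<!-- Sketch: an empty <address> matches all addresses; '*' is not a valid
     wildcard for this element -->
<cluster-connection name="my-cluster">
   <address></address>
   <connector-ref>netty-connector</connector-ref>
   <static-connectors>
      <connector-ref>broker-b-connector</connector-ref>
   </static-connectors>
</cluster-connection>
```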
>> >
>> > Even after you fix that configuration issue you may run into issues.
>> > These may be fixed already via ARTEMIS-1523 and/or ARTEMIS-1680.  
>> > If you have a reproducible test-case then you can verify using the 
>> > head of the master branch.
>> >
>> > For the memory issue it would be helpful to have some heap dumps
>> > or something similar to see what's actually consuming the memory.
>> > Better yet would be a reproducible test-case.  Do you have either?
>> >
>> >
>> > Justin
>> >
>> > [1] https://activemq.apache.org/artemis/docs/latest/clusters.html
>> >
>> >
>> >
>> > On Wed, Feb 21, 2018 at 5:39 AM, Ilkka Virolainen < 
>> > Ilkka.Virolainen@bitwise.fi> wrote:
>> >
>> > > Hello,
>> > >
>> > > I am using Artemis 2.4.0 to broker messages through JMS 
>> > > queues/topics between a set of clients. Some are Apache NMS 1.7.2 
>> > > ActiveMQ clients and others are using Artemis JMS client 1.5.4 
>> > > included in Spring Boot
>> > 1.5.3.
>> > > Broker topology is a symmetric cluster of two live nodes with 
>> > > static connectors, both nodes having been set up as replicating 
>> > > colocated backup pairs with scale down. I have two quite 
>> > > frustrating issues at the
>> > moment:
>> > > message redistribution not working correctly and a memory leak 
>> > > causing eventual thread death.
>> > >
>> > > ISSUE #1 - Message redistribution / load balancing not working:
>> > >
>> > > Client 1 (NMS) connects to broker a and starts listening, artemis 
>> > > creates the following address:
>> > >
>> > > (Broker a):
>> > > A.B.*.C.D
>> > > |-queues
>> > > |-multicast
>> > >   |-f64dd592-a8fb-442e-826d-927834d566f4
>> > >
>> > > Server 1 (artemis-jms-client) connects to broker b and sends a 
>> > > message to
>> > > topic: A.B.f64dd592-a8fb-442e-826d-927834d566f4.C.D - this should 
>> > > be routed to broker a since the corresponding queue has no 
>> > > consumers on broker b (the queue does not exist). This however 
>> > > does not happen and the client receives no messages. Broker b has 
>> > > some other clients connected, which has caused similar (but not
>> > > identical) queues to be created:
>> > >
>> > > (Broker b):
>> > > A.B.*.C.D
>> > > |-queues
>> > > |-multicast
>> > >   |-1eb48079-7fd8-40e9-b822-bcc25695ced0
>> > >   |-9f295257-c352-4ae6-b74b-d5994f330485
>> > >
>> > >
>> > > ISSUE #2: - Memory leak and eventual thread death
>> > >
>> > > The Artemis broker has 4 GB of allocated heap space and
>> > > global-max-size is set to half of that (the default setting).
>> > > The address-full-policy is set to PAGE for all addresses, and
>> > > some individual addresses have small max-size-bytes values set,
>> > > e.g. 104857600. As far as I know, the paging settings should
>> > > limit memory usage, but at times Artemis uses the whole heap
>> > > space, encounters an out-of-memory error and dies:
>> > >
>> > > 05:39:29,510 WARN  [org.eclipse.jetty.util.thread.QueuedThreadPool] :
>> > > java.lang.OutOfMemoryError: Java heap space
>> > > 05:39:16,646 WARN  [io.netty.channel.ChannelInitializer] Failed to
>> > > initialize a channel. Closing: [id: ...]: java.lang.OutOfMemoryError:
>> > > Java heap space
>> > > 05:41:05,597 WARN  [org.eclipse.jetty.util.thread.QueuedThreadPool]
>> > > Unexpected thread death: org.eclipse.jetty.util.thread.QueuedThreadPool$2@5ffaba31
>> > > in qtp20111564{STARTED,8<=8<=200,i=2,q=0}
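To illustrate what I expect the sizing settings to do, here is a sketch of how I understand the limits relate. This is an illustrative fragment, not our exact configuration; the global-max-size and page-size-bytes values are assumptions for the example:

```xml
<!-- Illustrative sketch only: the paging limits and their relation. -->
<core xmlns="urn:activemq:core">
    <!-- 2 GiB: once the estimated in-memory message size across all
         addresses reaches this, addresses start paging to disk -->
    <global-max-size>2147483648</global-max-size>
    <address-settings>
        <address-setting match="#">
            <!-- per-address cap; exceeding it triggers the policy -->
            <max-size-bytes>104857600</max-size-bytes>
            <!-- size of each page file written to the paging dir -->
            <page-size-bytes>10485760</page-size-bytes>
            <address-full-policy>PAGE</address-full-policy>
        </address-setting>
    </address-settings>
</core>
```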
>> > >
>> > > Are these known issues in Artemis or misconfigurations in the brokers?
>> > >
>> > > The broker configurations are as follows. Broker b has an
>> > > identical configuration, except that the cluster connection's
>> > > connector-ref and static-connector connector-ref refer to
>> > > broker b and broker a respectively.
>> > >
>> > > Best regards,
>> > >
>> > > broker.xml (broker a):
>> > >
>> > > <?xml version='1.0'?>
>> > > <configuration xmlns="urn:activemq"
>> > > xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>> > > xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
>> > >     <core xmlns="urn:activemq:core"
>> > > xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>> > > xsi:schemaLocation="urn:activemq:core ">
>> > >         <name>[broker-a-ip]</name>
>> > >         <persistence-enabled>true</persistence-enabled>
>> > >
>> > >         <journal-type>NIO</journal-type>
>> > >
>> > >         <paging-directory>...</paging-directory>
>> > >         <bindings-directory>...</bindings-directory>
>> > >         <journal-directory>...</journal-directory>
>> > >         <large-messages-directory>...</large-messages-directory>
>> > >
>> > >         <journal-datasync>true</journal-datasync>
>> > >         <journal-min-files>2</journal-min-files>
>> > >         <journal-pool-files>-1</journal-pool-files>
>> > >         <journal-buffer-timeout>788000</journal-buffer-timeout>
>> > >         <disk-scan-period>5000</disk-scan-period>
>> > >
>> > >         <max-disk-usage>97</max-disk-usage>
>> > >
>> > >         <critical-analyzer>true</critical-analyzer>
>> > >         <critical-analyzer-timeout>120000</critical-analyzer-timeout>
>> > >         <critical-analyzer-check-period>60000</critical-analyzer-check-period>
>> > >         <critical-analyzer-policy>HALT</critical-analyzer-policy>
>> > >
>> > >         <acceptors>
>> > >             <acceptor name="invm-acceptor">vm://0</acceptor>
>> > >             <acceptor name="artemis">tcp://0.0.0.0:61616</acceptor>
>> > >             <acceptor name="ssl">tcp://0.0.0.0:61617?sslEnabled=true;keyStorePath=...;keyStorePassword=...</acceptor>
>> > >         </acceptors>
>> > >         <connectors>
>> > >             <connector name="invm-connector">vm://0</connector>
>> > >             <connector name="netty-connector">tcp://[broker-a-ip]:61616</connector>
>> > >             <connector name="broker-b-connector">[broker-b-ip]:61616</connector>
>> > >         </connectors>
>> > >
>> > >         <cluster-connections>
>> > >             <cluster-connection name="cluster-name">
>> > >                 <address>*</address>
>> > >                 <connector-ref>netty-connector</connector-ref>
>> > >                 <retry-interval>500</retry-interval>
>> > >                 <reconnect-attempts>5</reconnect-attempts>
>> > >                 <use-duplicate-detection>true</use-duplicate-detection>
>> > >                 <message-load-balancing>ON_DEMAND</message-load-balancing>
>> > >                 <max-hops>1</max-hops>
>> > >                 <static-connectors>
>> > >                     <connector-ref>broker-b-connector</connector-ref>
>> > >                 </static-connectors>
>> > >             </cluster-connection>
>> > >         </cluster-connections>
>> > >
>> > >         <ha-policy>
>> > >             <replication>
>> > >                 <colocated>
>> > >                     <backup-request-retry-interval>5000</backup-request-retry-interval>
>> > >                     <max-backups>3</max-backups>
>> > >                     <request-backup>true</request-backup>
>> > >                     <backup-port-offset>100</backup-port-offset>
>> > >                     <excludes>
>> > >                         <connector-ref>invm-connector</connector-ref>
>> > >                         <connector-ref>netty-connector</connector-ref>
>> > >                     </excludes>
>> > >                     <master>
>> > >                         <check-for-live-server>true</check-for-live-server>
>> > >                     </master>
>> > >                     <slave>
>> > >                         <restart-backup>false</restart-backup>
>> > >                         <scale-down />
>> > >                     </slave>
>> > >                 </colocated>
>> > >             </replication>
>> > >         </ha-policy>
>> > >
>> > >         <cluster-user>ARTEMIS.CLUSTER.ADMIN.USER</cluster-user>
>> > >         <cluster-password>[the shared cluster password]</cluster-password>
>> > >
>> > >         <security-settings>
>> > >             <security-setting match="#">
>> > >                 <permission type="createDurableQueue" roles="amq, other-role" />
>> > >                 <permission type="deleteDurableQueue" roles="amq, other-role" />
>> > >                 <permission type="createNonDurableQueue" roles="amq, other-role" />
>> > >                 <permission type="createAddress" roles="amq, other-role" />
>> > >                 <permission type="deleteNonDurableQueue" roles="amq, other-role" />
>> > >                 <permission type="deleteAddress" roles="amq, other-role" />
>> > >                 <permission type="consume" roles="amq, other-role" />
>> > >                 <permission type="browse" roles="amq, other-role" />
>> > >                 <permission type="send" roles="amq, other-role" />
>> > >                 <permission type="manage" roles="amq" />
>> > >             </security-setting>
>> > >             <security-setting match="A.some.queue">
>> > >                 <permission type="createNonDurableQueue" roles="amq, other-role" />
>> > >                 <permission type="deleteNonDurableQueue" roles="amq, other-role" />
>> > >                 <permission type="createDurableQueue" roles="amq, other-role" />
>> > >                 <permission type="deleteDurableQueue" roles="amq, other-role" />
>> > >                 <permission type="createAddress" roles="amq, other-role" />
>> > >                 <permission type="deleteAddress" roles="amq, other-role" />
>> > >                 <permission type="consume" roles="amq, other-role" />
>> > >                 <permission type="browse" roles="amq, other-role" />
>> > >                 <permission type="send" roles="amq, other-role" />
>> > >             </security-setting>
>> > >             <security-setting match="A.some.other.queue">
>> > >                 <permission type="createNonDurableQueue" roles="amq, other-role" />
>> > >                 <permission type="deleteNonDurableQueue" roles="amq, other-role" />
>> > >                 <permission type="createDurableQueue" roles="amq, other-role" />
>> > >                 <permission type="deleteDurableQueue" roles="amq, other-role" />
>> > >                 <permission type="createAddress" roles="amq, other-role" />
>> > >                 <permission type="deleteAddress" roles="amq, other-role" />
>> > >                 <permission type="consume" roles="amq, other-role" />
>> > >                 <permission type="browse" roles="amq, other-role" />
>> > >                 <permission type="send" roles="amq, other-role" />
>> > >             </security-setting>
>> > >             ...
>> > >             ... etc.
>> > >             ...
>> > >         </security-settings>
>> > >
>> > >         <address-settings>
>> > >             <address-setting match="activemq.management#">
>> > >                 <dead-letter-address>DLQ</dead-letter-address>
>> > >                 <expiry-address>ExpiryQueue</expiry-address>
>> > >                 <redelivery-delay>0</redelivery-delay>
>> > >                 <max-size-bytes>-1</max-size-bytes>
>> > >                 <message-counter-history-day-limit>10</message-counter-history-day-limit>
>> > >                 <address-full-policy>PAGE</address-full-policy>
>> > >             </address-setting>
>> > >             <!--default for catch all -->
>> > >             <address-setting match="#">
>> > >                 <dead-letter-address>DLQ</dead-letter-address>
>> > >                 <expiry-address>ExpiryQueue</expiry-address>
>> > >                 <redelivery-delay>0</redelivery-delay>
>> > >                 <max-size-bytes>-1</max-size-bytes>
>> > >                 <message-counter-history-day-limit>10</message-counter-history-day-limit>
>> > >                 <address-full-policy>PAGE</address-full-policy>
>> > >                 <redistribution-delay>1000</redistribution-delay>
>> > >             </address-setting>
>> > >             <address-setting match="DLQ">
>> > >                 <!-- 100 * 1024 * 1024 -> 100MB -->
>> > >                 <max-size-bytes>104857600</max-size-bytes>
>> > >                 <!-- 1000 * 60 * 60 -> 1h -->
>> > >                 <expiry-delay>3600000</expiry-delay>
>> > >                 <expiry-address />
>> > >             </address-setting>
>> > >             <address-setting match="A.some.queue">
>> > >                 <redelivery-delay-multiplier>1.0</redelivery-delay-multiplier>
>> > >                 <redelivery-delay>0</redelivery-delay>
>> > >                 <max-redelivery-delay>10</max-redelivery-delay>
>> > >             </address-setting>
>> > >             <address-setting match="A.some.other.queue">
>> > >                 <redelivery-delay-multiplier>1.0</redelivery-delay-multiplier>
>> > >                 <redelivery-delay>0</redelivery-delay>
>> > >                 <max-redelivery-delay>10</max-redelivery-delay>
>> > >                 <max-delivery-attempts>1</max-delivery-attempts>
>> > >                 <max-size-bytes>104857600</max-size-bytes>
>> > >             </address-setting>
>> > >             ...
>> > >             ... etc.
>> > >             ...
>> > >         </address-settings>
>> > >
>> > >         <addresses>
>> > >             <address name="DLQ">
>> > >                 <anycast>
>> > >                     <queue name="DLQ" />
>> > >                 </anycast>
>> > >             </address>
>> > >             <address name="ExpiryQueue">
>> > >                 <anycast>
>> > >                     <queue name="ExpiryQueue" />
>> > >                 </anycast>
>> > >             </address>
>> > >             <address name="A.some.queue">
>> > >                 <anycast>
>> > >                     <queue name="A.some.queue">
>> > >                         <durable>true</durable>
>> > >                     </queue>
>> > >                 </anycast>
>> > >             </address>
>> > >             <address name="A.some.other.queue">
>> > >                 <anycast>
>> > >                     <queue name="A.some.other.queue">
>> > >                         <durable>true</durable>
>> > >                     </queue>
>> > >                 </anycast>
>> > >             </address>
>> > >             ...
>> > >             ... etc.
>> > >             ...
>> > >         </addresses>
>> > >     </core>
>> > > </configuration>
>> > >
>> >
>>
> --
> Clebert Suconic


