activemq-users mailing list archives

From Clebert Suconic <clebert.suco...@gmail.com>
Subject Re: artemis 2.2.0 logging disaster
Date Fri, 02 Jun 2017 19:27:06 GMT
There was a recent bug fix on MQTT. Are you using a snapshot after the fix?

On Fri, Jun 2, 2017 at 12:59 PM, Michael André Pearce <michael.andre.pearce@me.com> wrote:
> Also sorry just one other question.
>
> Does this occur with 2.1.0?
>
> Sent from my iPhone
>
>> On 2 Jun 2017, at 17:57, Michael André Pearce <michael.andre.pearce@me.com> wrote:
>>
>> Essentially, just from this log output, I assume the server your broker is running on is running out of RAM.
>> This can be either:
>> A) a genuine memory leak in Artemis, or
>> B) you simply don't have enough RAM for the load/throughput.
>>
>> Some questions:
>>
>> Is the load constant?
>> Do you have server ram usage metrics available?
>>
>> You should ensure there is more RAM available to the broker instance than just the heap allocation, for network buffers etc.
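
[The advice above is about the split between heap and off-heap memory. A hedged sketch of where that split is configured for an Artemis instance, in its `etc/artemis.profile`; the values here are illustrative examples, not recommendations:]

```shell
# etc/artemis.profile of the broker instance (example values only).
# -Xmx caps the Java heap; -XX:MaxDirectMemorySize caps the direct
# (off-heap) memory Netty uses for network buffers. The "max: 1073741824"
# (1 GiB) in the log is the direct-memory cap being exhausted, which is
# separate from the heap -- raising -Xmx alone would not help.
JAVA_ARGS="-Xms512M -Xmx1G -XX:MaxDirectMemorySize=2G"
```

[Note the host or container then needs headroom above heap + direct memory for metaspace, thread stacks, etc.]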
>>
>> Cheers
>> Mike
>>
>>
>>
>> Sent from my iPhone
>>
>>> On 2 Jun 2017, at 09:44, Helge Waastad <helge@waastad.org> wrote:
>>>
>>> Hi,
>>> I'm running artemis 2.2.0 as a docker container.
>>>
>>> I'm collecting MQTT messages and these are consumed by a JMS consumer
>>> (artemis-jms-client).
>>>
>>> It's running fine for a while, but suddenly this appears (docker *-json.log):
>>>
>>> {"log":"19:16:12,338 WARN  [org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnection] Trying to allocate 712 bytes, System is throwing OutOfMemoryError on NettyConnection org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection@6f035b0a[local=/10.42.154.105:61616, remote=/10.42.21.198:40844], there are currently pendingWrites: [NETTY] -\u003e 0[EVENT LOOP] -\u003e 0 causes: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824): io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T19:16:12.342853929Z"}
>>> {"log":"19:16:12,342 WARN  [org.apache.activemq.artemis.core.server] AMQ222151: removing consumer which did not handle a message, consumer=ServerConsumerImpl [id=0, filter=null, binding=LocalQueueBinding [address=CentreonTopic, queue=QueueImpl[name=CentreonTopic, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=772ad6f8-4630-11e7-93cd-02a837635b7b], temp=false]@3e389a2d, filter=null, name=CentreonTopic, clusterName=CentreonTopic772ad6f8-4630-11e7-93cd-02a837635b7b]], message=Reference[715739]:NON-RELIABLE:CoreMessage[messageID=715739,durable=false,userID=null,priority=0, timestamp=0,expiration=0, durable=false, address=CentreonTopic,properties=TypedProperties[mqtt.message.retain=false,mqtt.qos.level=0]]@1623021181: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T19:16:12.347107296Z"}
>>> {"log":"19:31:54,236 WARN  [org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnection] Trying to allocate 548 bytes, System is throwing OutOfMemoryError on NettyConnection org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection@7b18e1a6[local=/10.42.154.105:61616, remote=/10.42.162.183:48376], there are currently pendingWrites: [NETTY] -\u003e 0[EVENT LOOP] -\u003e 0 causes: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824): io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T19:31:54.238904544Z"}
>>> {"log":"19:31:54,238 WARN  [org.apache.activemq.artemis.core.server] AMQ222151: removing consumer which did not handle a message, consumer=ServerConsumerImpl [id=0, filter=null, binding=LocalQueueBinding [address=CentreonTopic, queue=QueueImpl[name=CentreonTopic, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=772ad6f8-4630-11e7-93cd-02a837635b7b], temp=false]@3e389a2d, filter=null, name=CentreonTopic, clusterName=CentreonTopic772ad6f8-4630-11e7-93cd-02a837635b7b]], message=Reference[722892]:NON-RELIABLE:CoreMessage[messageID=722892,durable=false,userID=null,priority=0, timestamp=0,expiration=0, durable=false, address=CentreonTopic,properties=TypedProperties[mqtt.message.retain=false,mqtt.qos.level=0]]@1252621657: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T19:31:54.239955162Z"}
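
[Aside on the numbers in these records: the tiny requested sizes (712 and 548 bytes) fail because Netty's pooled allocator reserves direct memory in large chunks (16 MiB is its default chunk size), so the check that trips is chunk-sized, not request-sized. Verifying with the figures from the log:]

```shell
# Figures taken from the log records above
used=1057466368    # direct memory currently used
max=1073741824     # direct memory cap (1 GiB)
chunk=16777216     # Netty's default 16 MiB allocation chunk

# Headroom left under the cap -- less than one chunk
echo $(( max - used ))                                   # 16275456

# So allocating one more chunk exceeds the cap, even for a tiny write
[ $(( used + chunk )) -gt "$max" ] && echo "chunk allocation fails"
```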
>>>
>>>
>>>
>>> Then after a couple of hours:
>>>
>>> {"log":"23:22:24,013 WARN  [io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T23:22:24.015087347Z"}
>>> {"log":"23:22:24,014 WARN  [io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T23:22:24.015759902Z"}
>>> {"log":"23:22:24,015 WARN  [io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T23:22:24.016623101Z"}
>>>
>>>
>>> And this message is looping, and within 5 minutes it has filled my 12GB drive.
>>>
>>> Any clues what to do? I'll do some more debugging.
>>>
>>> /hw
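
[On the disk-filling side of the report above: Docker's default `json-file` logging driver does not rotate logs, so a looping WARN can fill the disk regardless of the broker fix. A hedged sketch of capping the log size at container start; the flags are standard Docker log options, but the image name is purely illustrative:]

```shell
# Cap and rotate the container's json-file log so a looping WARN
# cannot fill the disk: at most 3 files of 100 MB each.
# (image name is a placeholder -- substitute your Artemis image)
docker run -d \
  --log-driver json-file \
  --log-opt max-size=100m \
  --log-opt max-file=3 \
  my-artemis-image:2.2.0
```

[This contains the symptom only; the underlying direct-memory exhaustion still needs to be addressed.]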



-- 
Clebert Suconic
