From: Michael André Pearce
Date: Fri, 02 Jun 2017 17:57:23 +0100
Subject: Re: artemis 2.2.0 logging disaster
To: users@activemq.apache.org

Essentially, just from this log output, I assume the server your broker is running on is running out of RAM.

This can be either:

A) a genuine memory leak in Artemis, or
B) you simply don't have enough RAM for the load/throughput.

Some questions:

Is the load constant?
Do you have server RAM usage metrics available?

You should ensure there is more RAM available to the broker instance than just the heap allocation, for network buffers etc.
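For what it's worth, the "max: 1073741824" in your log is a 1 GiB cap on direct (off-heap) memory, which is what Netty allocates its network buffers from, and the failing 16777216-byte allocation is a 16 MiB buffer chunk; by default that cap usually follows the heap size. If the host has spare RAM you can raise it explicitly. Just a sketch of the idea, assuming a standard broker instance created with "artemis create" - the exact contents of etc/artemis.profile differ between versions, and the sizes below are purely illustrative, not a recommendation:

  # etc/artemis.profile of the broker instance (path assumed; adjust to your install)
  # -Xmx caps the Java heap; -XX:MaxDirectMemorySize caps direct/off-heap memory,
  # which is the pool the NettyConnection allocations in your log are failing in.
  # Size both to what the container/host actually has, leaving headroom for the OS.
  JAVA_ARGS="-Xms512M -Xmx1G -XX:MaxDirectMemorySize=2G"

And if the container itself has a memory limit, make sure heap + direct memory + some overhead fits inside it, otherwise you just trade this error for the container being killed.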
Cheers
Mike

Sent from my iPhone

> On 2 Jun 2017, at 09:44, Helge Waastad wrote:
>
> Hi,
> I'm running artemis 2.2.0 as a docker container.
>
> I'm collecting MQTT messages and these are consumed by a JMS consumer
> (artemis-jms-client).
>
> It's running fine for a while, but suddenly this appears (docker *-json.log):
>
> {"log":"19:16:12,338 WARN [org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnection] Trying to allocate 712 bytes, System is throwing OutOfMemoryError on NettyConnection org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection@6f035b0a[local= /10.42.154.105:61616, remote=/10.42.21.198:40844], there are currently pendingWrites: [NETTY] -\u003e 0[EVENT LOOP] -\u003e 0 causes: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824): io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T19:16:12.342853929Z"}
> {"log":"19:16:12,342 WARN [org.apache.activemq.artemis.core.server] AMQ222151: removing consumer which did not handle a message, consumer=ServerConsumerImpl [id=0, filter=null, binding=LocalQueueBinding [address=CentreonTopic, queue=QueueImpl[name=CentreonTopic, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=772ad6f8-4630-11e7-93cd-02a837635b7b], temp=false]@3e389a2d, filter=null, name=CentreonTopic, clusterName=CentreonTopic772ad6f8-4630-11e7-93cd-02a837635b7b]], message=Reference[715739]:NON-RELIABLE:CoreMessage[messageID=715739,durable=false,userID=null,priority=0, timestamp=0,expiration=0, durable=false, address=CentreonTopic,properties=TypedProperties[mqtt.message.retain=false,mqtt.qos.level=0]]@1623021181: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T19:16:12.347107296Z"}
> {"log":"19:31:54,236 WARN [org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnection] Trying to allocate 548 bytes, System is throwing OutOfMemoryError on NettyConnection org.apache.activemq.artemis.core.remoting.impl.netty.NettyServerConnection@7b18e1a6[local= /10.42.154.105:61616, remote=/10.42.162.183:48376], there are currently pendingWrites: [NETTY] -\u003e 0[EVENT LOOP] -\u003e 0 causes: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824): io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T19:31:54.238904544Z"}
> {"log":"19:31:54,238 WARN [org.apache.activemq.artemis.core.server] AMQ222151: removing consumer which did not handle a message, consumer=ServerConsumerImpl [id=0, filter=null, binding=LocalQueueBinding [address=CentreonTopic, queue=QueueImpl[name=CentreonTopic, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=772ad6f8-4630-11e7-93cd-02a837635b7b], temp=false]@3e389a2d, filter=null, name=CentreonTopic, clusterName=CentreonTopic772ad6f8-4630-11e7-93cd-02a837635b7b]], message=Reference[722892]:NON-RELIABLE:CoreMessage[messageID=722892,durable=false,userID=null,priority=0, timestamp=0,expiration=0, durable=false, address=CentreonTopic,properties=TypedProperties[mqtt.message.retain=false,mqtt.qos.level=0]]@1252621657: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T19:31:54.239955162Z"}
>
> Then after a couple of hours:
>
> {"log":"23:22:24,013 WARN [io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T23:22:24.015087347Z"}
> {"log":"23:22:24,014 WARN [io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T23:22:24.015759902Z"}
> {"log":"23:22:24,015 WARN [io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.: io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 1057466368, max: 1073741824)\r\n","stream":"stdout","time":"2017-06-01T23:22:24.016623101Z"}
>
> And this message is looping, and in 5 minutes it has filled my 12GB drive.
>
> Any clues what to do? I'll do some more debugging.
>
> /hw