From: "Robillard, Greg L" <greg.l.robillard@lmco.com>
To: users@activemq.apache.org
Subject: ActiveMQ disconnects producer with large message throughput
Date: Thu, 12 Aug 2010 09:59:51 -0600

Not certain where to begin.

apache-activemq-5.3.2
using non-persistent queues
using openwire JMS connections

Problem described: Normal operation has about 30 clients connected, each receiving between 300 and 500 messages per minute. The problem occurs when a single client configures a large amount of data; this can push that one client up to 10,000 messages per minute. The message size is small, generally at or under 1K. Initially producerFlowControl was set to true, but this shut the producer down for everyone, so producerFlowControl is now set to false. With flow control off, the client queue size continues to fill (NotificationQueueSizeExceeded).
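For context, the flow control change is just the per-destination policy in activemq.xml, roughly like the sketch below. The ">" wildcard and the 5mb memoryLimit are example values, not what is actually running here:

    <destinationPolicy>
      <policyMap>
        <policyEntries>
          <!-- ">" matches every queue; the per-destination memoryLimit caps how much
               broker memory one queue may hold before the pending cursor takes over. -->
          <policyEntry queue=">" producerFlowControl="false" memoryLimit="5mb"/>
        </policyEntries>
      </policyMap>
    </destinationPolicy>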
The queue overflow happens primarily on slower networks and client computers; faster networks and client computers can often handle this data rate. What I currently do is trap this situation and log the client off, since that client cannot keep up with the data rate.

The specific problem is that sometimes, when a client is bringing in large amounts of data, ActiveMQ simply runs out of memory and shuts down the producer and all of the clients. So far, the only way I have been able to recover from this is to restart ActiveMQ and the producer.

I am looking for what needs to be done to keep ActiveMQ from running out of memory when these large data rates occur and producerFlowControl is not in use. Additionally, how can I recover if this situation does happen? I have attempted to increase the prefetch size to improve throughput, to no avail. I was using vmQueueCursor, but am currently trying fileQueueCursor.

Any suggestions or ideas would be of great help.

Greg
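P.S. For reference, this is roughly the direction I have been experimenting with for the broker memory limits, the file cursor, and the prefetch override. It is only a sketch; the limit values, the ">" wildcard, and the queuePrefetch number are placeholders rather than my production settings:

    <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost">

      <!-- Overall broker memory budget; non-persistent messages count against
           memoryUsage, and spooled overflow goes against tempUsage. -->
      <systemUsage>
        <systemUsage>
          <memoryUsage>
            <memoryUsage limit="256 mb"/>
          </memoryUsage>
          <tempUsage>
            <tempUsage limit="1 gb"/>
          </tempUsage>
        </systemUsage>
      </systemUsage>

      <destinationPolicy>
        <policyMap>
          <policyEntries>
            <!-- queuePrefetch overrides the consumer prefetch for all queues, and
                 fileQueueCursor spools pending messages to disk instead of keeping
                 them in broker memory (the vmQueueCursor behaviour). -->
            <policyEntry queue=">" queuePrefetch="1000">
              <pendingQueuePolicy>
                <fileQueueCursor/>
              </pendingQueuePolicy>
            </policyEntry>
          </policyEntries>
        </policyMap>
      </destinationPolicy>

    </broker>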