Subject: Re: Understanding memoryUsage (once again!)
From: Juan Nin <juanin@gmail.com>
To: users@activemq.apache.org
Date: Thu, 29 Nov 2012 21:42:44 -0200

Right now I set it to 5mb per queue, but I guess it could be even less.
I tested using "0 mb" to see if it would flush completely to disk without using any memory, but that didn't work; in that case it seems to behave the same as not setting memoryLimit at all.
Is there any way to make them just go to disk without using memory at all?

Now, once they go to disk, how does ActiveMQ process them upon consumption? Does it load them into memory in chunks or something like that and keep them there, or does it just grab them from disk as required?

I'll have cases where consumption will be one by one, sporadic, by different intermittent processes. But there will be other cases where even millions will be consumed at once by a multithreaded Java app.
So for the intermittent ones it would be fine if it just loads them from disk, but for the other ones it would be interesting if it could load bigger chunks into memory for faster consumption. Not sure if that's possible though.
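For concreteness, the per-queue limit I'm talking about boils down to a policyEntry like the sketch below (just a sketch: the queue pattern is a wildcard example and the other attributes are carried over from my earlier config); the "0 mb" experiment was the same entry with memoryLimit="0 mb":

    <policyEntry queue=">" producerFlowControl="false" optimizedDispatch="true" memoryLimit="5mb"/>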
Thanks again.


On Thu, Nov 29, 2012 at 6:49 PM, Christian Posta wrote:

> Another thought... if you know they won't be immediately consumed (slow, or even intermittent consumers), why keep them in memory at all? Or at least why keep such a large number of them? Maybe turn down the memory limits on the destinations with knowingly slow consumers so that fewer messages are kept in memory (they will be kept in the store if they are persistent messages and then recovered when they are ready to be dispatched). The only downside is that if the consumers speed up, you'll be fetching from disk more often.
>
> Would be nice if the broker could auto-tune its memory usage... uh oh...
>
>
> On Thu, Nov 29, 2012 at 12:45 PM, Juan Nin wrote:
>
> > Hi Christian!
> >
> > Yes, actually that's what I'm doing, just setting per-destination policies, which work for me.
> > I needed them anyway because I'm creating queues with lots of messages which won't be immediately consumed, so having them store a lot into memory ended up slowing things down.
> >
> > So I just assigned enough memory to the broker so as not to run into issues.
> >
> > Thanks again.
> >
> >
> > On Tue, Nov 27, 2012 at 9:40 PM, Christian Posta wrote:
> >
> > > See inline...
> > >
> > >
> > > On Wed, Nov 21, 2012 at 12:04 PM, Juan Nin wrote:
> > >
> > > > Hi!
> > > >
> > > > Sorry for the delay in replying, buried on a project.
> > > >
> > > > As I mentioned before, I had tested this with 5.7.0 with the same behaviour. I just tested it again (both with 5.3.2 and 5.7.0) and it's the same thing, and in my case it doesn't matter if there are consumers or not, it always seems to make use of the memory.
> > > >
> > > > Although I guess in theory it should not matter, did you use Stomp for your testing, or maybe Openwire? I'm using Stomp for my testing.
> > > >
> > > > It might be, though, that the broker's memory itself is not going beyond 70% of memoryUsage, and these are just per-destination counters as you mentioned. In which case I guess the value shown as "Memory percent used" is a bit confusing... But I haven't had much time to really test the possibility of exhausting the broker's memory.
> > >
> > > No, I believe what you're seeing is correct. The broker's memory usage is going beyond the memoryUsage limit (way beyond). When a queue checks whether memory is full, it will only do something interesting if producer flow control is enabled. Otherwise, it will continue on. You are seeing that it will continue to add messages until the queue's memory usage reaches the 70% mark of its 40MB limit. Since MemoryUsages are hierarchical, this means those messages are accounted for in the overall broker memory as well. For each queue, you'll see that it will continue to hold 70% of 40MB of memory. What you want in this case (if there are no consumers, or slow consumers) is to raise your system usage memory limit OR lower your per-destination limits OR lower your cursor high-water mark, or a combination of all three.
> > >
> > > http://activemq.apache.org/per-destination-policies.html
> > >
> > > With PFC turned off, you're essentially telling the broker to take the message no matter what. There is a point at which you will run out of resources (memory, disk, etc). The trick is to find your use case and tune for that.
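Just to map those three knobs onto the actual config for anyone searching the archives later: I believe they correspond roughly to the sketch below (the numbers are placeholders, not values from my broker, and cursorMemoryHighWaterMark is what I understand the "cursor highwatermark" attribute to be called):

    <!-- raise the system usage memory limit -->
    <systemUsage>
      <systemUsage>
        <memoryUsage>
          <memoryUsage limit="512 mb"/>
        </memoryUsage>
      </systemUsage>
    </systemUsage>

    <!-- ...or lower the per-destination limit and the cursor high-water mark (default 70) -->
    <policyEntry queue=">" producerFlowControl="false" memoryLimit="5mb" cursorMemoryHighWaterMark="30"/>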
> > > >
> > > > Will try to do some more testing soon...
> > > >
> > > > Thanks
> > > >
> > > >
> > > > On Wed, Nov 21, 2012 at 2:28 PM, Christian Posta wrote:
> > > >
> > > > > Can you please try on 5.7?
> > > > > I just tried a test, and if there are no consumers on the queue then the memory usage will stay at 0%. The message will not be retained, i.e. it will be put into the store and kept there. If I add a consumer, and don't try to consume, the message will be kept around in memory up to the cursor high-water mark (70 by default).
> > > > >
> > > > > As I add more queues the same behavior as described above will happen. If I attach consumers to the queues without consuming from them (so no messages are consumed), then messages are kept in the cursor up to the high-water mark... note: the high-water mark is relative to the Destination/Cursor's MemoryUsage, not the global memory usage.
> > > > >
> > > > > If I continue adding queues, and with producer flow control set to false, I too will see the *global* memory usage go much higher than 100%. This is not surprising though, because as I understand it, these usage memory objects are really just counters. They don't enforce anything. When coupled with producer flow control, they can be used to determine when to enable PFC. If PFC is false, it's up to the cursor to determine when to flush out to disk. But each destination/cursor will have its own system usage (with the global as the parent).
> > > > >
> > > > > Hope this helps. Can you please try with 5.7 and give us a report back?
> > > > > Thanks,
> > > > > Christian
> > > > >
> > > > >
> > > > > On Fri, Nov 16, 2012 at 11:38 AM, Juan Nin wrote:
> > > > >
> > > > > > nope, adding a 3rd queue the 3rd one also gets this same value, so even if it's per-queue memory usage it's still going beyond the limit anyway..
> > > > > >
> > > > > >
> > > > > > On Fri, Nov 16, 2012 at 4:32 PM, Juan Nin wrote:
> > > > > >
> > > > > > > Might it be just a bug in how the MemoryPercentUsage is calculated?
> > > > > > >
> > > > > > > If I connect via JMX using jconsole, I can see the MemoryPercentUsage as 112 right now.
> > > > > > > If I go to each of the 2 queues I see CursorMemoryUsage with value 29360604, which would be 28mb each, summing a total of 56mb (just a bit more than the specified memoryUsage of 50mb, and 56mb / 50mb works out to the 112% shown as MemoryPercentUsage).
> > > > > > >
> > > > > > > Not sure I'm interpreting these values correctly though, first time I access it via jconsole...
> > > > > > >
> > > > > > >
> > > > > > > On Fri, Nov 16, 2012 at 4:07 PM, Juan Nin wrote:
> > > > > > >
> > > > > > >> On that config there's a 40mb memoryLimit per queue, but I also tested it without it with the same results.
> > > > > > >>
> > > > > > >>
> > > > > > >> On Fri, Nov 16, 2012 at 4:05 PM, Juan Nin wrote:
> > > > > > >>
> > > > > > >>> Hi Torsten!
> > > > > > >>>
> > > > > > >>> I'm using ActiveMQ 5.3.2, but also tested it on 5.7.0 with the same results...
> > > > > > >>> This is my 5.3.2 config:
> > > > > > >>>
> > > > > > >>>     xmlns="http://www.springframework.org/schema/beans"
> > > > > > >>>     xmlns:amq="http://activemq.apache.org/schema/core"
> > > > > > >>>     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
> > > > > > >>>     xsi:schemaLocation="http://www.springframework.org/schema/beans
> > > > > > >>>     http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
> > > > > > >>>     http://activemq.apache.org/schema/core
> > > > > > >>>     http://activemq.apache.org/schema/core/activemq-core.xsd">
> > > > > > >>>
> > > > > > >>>     class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
> > > > > > >>>         file:${activemq.base}/conf/credentials.properties
> > > > > > >>>
> > > > > > >>>     brokerName="localhost" dataDirectory="${activemq.base}/data"
> > > > > > >>>     destroyApplicationContextOnStop="true" advisorySupport="false">
> > > > > > >>>
> > > > > > >>>     producerFlowControl="true" memoryLimit="5mb">
> > > > > > >>>
> > > > > > >>>     producerFlowControl="false" optimizedDispatch="true" memoryLimit="40mb">
> > > > > > >>>         queuePrefix="DLQ." useQueueForQueueMessages="true" />
> > > > > > >>>
> > > > > > >>>     enableJournalDiskSyncs="false" indexWriteBatchSize="10000" indexCacheSize="1000"/>
> > > > > > >>>
> > > > > > >>>     [...]
> > > > > > >>>
> > > > > > >>> Using just a simple PHP script with Stomp for feeding the queues (running it twice with a different queue name):
> > > > > > >>>
> > > > > > >>> <?php
> > > > > > >>>
> > > > > > >>> require_once("Stomp.php");
> > > > > > >>>
> > > > > > >>> $amq = new Stomp("tcp://localhost:61613");
> > > > > > >>> $amq->connect();
> > > > > > >>>
> > > > > > >>> for($i=1; $i <= 100000; $i++)
> > > > > > >>> {
> > > > > > >>>     if($i%1000 == 0)
> > > > > > >>>     {
> > > > > > >>>         echo "\nmsg #: $i";
> > > > > > >>>     }
> > > > > > >>>     $amq->send("/queue/test", "this is test message # $i", array('persistent' => 'true'));
> > > > > > >>> }
> > > > > > >>>
> > > > > > >>> $amq->disconnect();
> > > > > > >>>
> > > > > > >>> ?>
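The angle-bracketed tags in the config above were stripped somewhere along the way, so only the attribute text survives. Reassembled against a standard activemq.xml layout it presumably looked roughly like the sketch below; the element names, the topic=">" and queue=">" patterns and the nesting are inferred, and the systemUsage and transportConnector sections are not recoverable at all. The 40mb queue entry and the kahaDB settings are the parts that matter for the rest of the thread:

    <beans
        xmlns="http://www.springframework.org/schema/beans"
        xmlns:amq="http://activemq.apache.org/schema/core"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
        http://activemq.apache.org/schema/core
        http://activemq.apache.org/schema/core/activemq-core.xsd">

        <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
            <property name="locations">
                <value>file:${activemq.base}/conf/credentials.properties</value>
            </property>
        </bean>

        <broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost"
                dataDirectory="${activemq.base}/data"
                destroyApplicationContextOnStop="true" advisorySupport="false">

            <destinationPolicy>
                <policyMap>
                    <policyEntries>
                        <!-- inferred: the 5mb entry with flow control on, presumably for topics -->
                        <policyEntry topic=">" producerFlowControl="true" memoryLimit="5mb"/>
                        <!-- inferred: the 40mb per-queue entry discussed in the thread -->
                        <policyEntry queue=">" producerFlowControl="false" optimizedDispatch="true" memoryLimit="40mb">
                            <deadLetterStrategy>
                                <individualDeadLetterStrategy queuePrefix="DLQ." useQueueForQueueMessages="true"/>
                            </deadLetterStrategy>
                        </policyEntry>
                    </policyEntries>
                </policyMap>
            </destinationPolicy>

            <persistenceAdapter>
                <kahaDB enableJournalDiskSyncs="false" indexWriteBatchSize="10000" indexCacheSize="1000"/>
            </persistenceAdapter>

            <!-- systemUsage and transportConnector sections were here but did not survive -->

        </broker>
    </beans>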
> > > > > > >>>
> > > > > > >>>
> > > > > > >>> On Fri, Nov 16, 2012 at 3:47 PM, Torsten Mielke <torsten@fusesource.com> wrote:
> > > > > > >>>
> > > > > > >>>> Hello,
> > > > > > >>>>
> > > > > > >>>> See in-line response.
> > > > > > >>>>
> > > > > > >>>> On Nov 16, 2012, at 6:29 PM, Juan Nin wrote:
> > > > > > >>>>
> > > > > > >>>> > Hi!
> > > > > > >>>> >
> > > > > > >>>> > After some heavy digging about producer flow control and the systemUsage properties a couple of years ago, I thought I understood it quite well. But yesterday I found that one of my configs was not behaving exactly as I expected, so I started doing some tests, and I see certain behaviours which don't seem to match what the docs and the posts I find on the list or other forums say.
> > > > > > >>>> >
> > > > > > >>>> > "storeUsage" is perfectly clear: it's the max space that persistent messages can use to be stored on disk.
> > > > > > >>>> > "tempUsage" applies to file cursors on non-persistent messages, so as to flush to disk if memory limits are reached (I don't care much about this one anyway, I always use persistent messages).
> > > > > > >>>>
> > > > > > >>>> Correct.
> > > > > > >>>>
> > > > > > >>>> > Now, according to most posts, memoryUsage would be the maximum memory that the broker is able to use.
> > > > > > >>>> > On this post:
> > > > > > >>>> > http://stackoverflow.com/questions/7646057/activemq-destinationpolicy-and-systemusage-configuration
> > > > > > >>>> > it says that "memoryUsage corresponds to the amount of memory that's assigned to the in-memory store".
> > > > > > >>>>
> > > > > > >>>> Correct.
> > > > > > >>>>
> > > > > > >>>> > For example, on my tests using the following config (only showing relevant parts):
> > > > > > >>>> >
> > > > > > >>>> >     optimizedDispatch="true">
> > > > > > >>>> >     useQueueForQueueMessages="true" />
> > > > > > >>>> >     [...]
> > > > > > >>>> >
> > > > > > >>>> > With that config I would expect the broker to use 100 mb of memory at most among all queues. So it could maybe use 30mb in one queue and 70mb in a second queue.
> > > > > > >>>> >
> > > > > > >>>> > 1) What I'm seeing is that if I start feeding a queue without consuming it, the "Memory percent used" grows up to 70%, after which it doesn't grow anymore.
> > > > > > >>>> > What is it doing exactly there? The first 70% is stored in memory (apart from disk, since it's persistent), and all the rest that continues being fed goes just to disk?
> > > > > > >>>>
> > > > > > >>>> This behavior is correct. For queues the default cursor is the store cursor.
> > > > > > >>>> It keeps any newly arrived messages in memory as long as it does not reach the configured memory limit (either configured on the queue per destination, or globally in the memoryUsage settings).
> > > > > > >>>> Once the cursor reaches 70% of the configured limit (in your case the memoryUsage limit, since you don't specify a per-destination limit), it will not keep any more messages in memory. Instead it will reload these messages from the store when it's time to dispatch them. The broker persists any messages it receives anyway, before passing them on to the cursor.
> > > > > > >>>> This limit of 70% can be configured and raised to e.g. 100%.
> > > > > > >>>> This behavior is kind of an optimization. That way you run into producer flow control less often.
> > > > > > >>>> As long as the persistence store is not running full, there is no need to block producers, since the cursor can also load the messages from the store and does not necessarily have to keep them in memory.
> > > > > > >>>> If you configure the vmQueueCursor, then the behavior is different. This cursor cannot offload messages to the store but needs to keep them all in memory. The vmQueueCursor used to be the default cursor in older versions of AMQ.
> > > > > > >>>>
> > > > > > >>>> Also note that topic messages and non-persistent queue messages are not handled by the store cursor. These messages are held in memory and, if memory runs low, get swapped out to temp storage.
> > > > > > >>>>
> > > > > > >>>> > 2) If I then start feeding a 2nd queue, "Memory percent used" continues growing until it reaches 140%. So it looks like memoryUsage does not apply globally, but on a per-queue basis?
> > > > > > >>>>
> > > > > > >>>> What version of AMQ do you use? The sum of the memory usage of all queues should not go any higher than the configured memoryUsage limit. If you're not on 5.5.1 or a higher release, then I suggest upgrading.
> > > > > > >>>>
> > > > > > >>>> > Using memoryLimit on the queue's policyEntry gives more control over this, but it's just a variation; "Memory percent used" can grow beyond 100% anyway.
> > > > > > >>>>
> > > > > > >>>> With the default store cursor this should not be the case, from what I know.
> > > > > > >>>>
> > > > > > >>>> > 3) If #2 is true, then how would I prevent the broker from running out of memory in case queues continue to be created?
> > > > > > >>>>
> > > > > > >>>> Just like the above comment: I would expect the broker's MemoryPercentUsage won't grow over 100% and the destinations' MemoryPercentUsage remains pretty much at 70%.
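As a note on the cursor part: if I ever need to make the cursor choice explicit, I understand it is set per destination through pendingQueuePolicy, along the lines of the sketch below (the element and attribute names are my reading of the per-destination policy options, the queue patterns and values are made up, and I haven't tried this exact combination myself):

    <!-- explicit store cursor (the default on recent versions) plus a raised high-water mark -->
    <policyEntry queue=">" producerFlowControl="false" cursorMemoryHighWaterMark="100">
        <pendingQueuePolicy>
            <storeCursor/>
        </pendingQueuePolicy>
    </policyEntry>

    <!-- the older vmQueueCursor behaviour, which keeps everything in memory -->
    <policyEntry queue="LEGACY.>" producerFlowControl="true">
        <pendingQueuePolicy>
            <vmQueueCursor/>
        </pendingQueuePolicy>
    </policyEntry>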
> > > > > > >>>> Not sure why you would see a different behavior? Using an old version of AMQ perhaps? Or explicitly configuring the vmQueueCursor?
> > > > > > >>>> Could you perhaps also test with [...]
> > > > > > >>>>
> > > > > > >>>> > Maybe I'm misunderstanding and some of these settings make no sense when producerFlowControl is disabled?
> > > > > > >>>> >
> > > > > > >>>> > Thanks in advance.
> > > > > > >>>> >
> > > > > > >>>> > Juan
> > > > > > >>>>
> > > > > > >>>> Regards,
> > > > > > >>>>
> > > > > > >>>> Torsten Mielke
> > > > > > >>>> torsten@fusesource.com
> > > > > > >>>> tmielke.blogspot.com
>
> --
> *Christian Posta*
> http://www.christianposta.com/blog
> twitter: @christianposta