From: Kevin Burton
Date: Tue, 24 Feb 2015 17:36:04 -0800
Subject: Re: A proposal to rewrite purgeInactiveDestinations locking to prevent queue GC lockups.
To: users@activemq.apache.org

Yes. Agreed that they look like integration tests. More than a minute is more of an integration test.

What I'll do is come up with a proposal once I have our internal code testing with parallelism.

My ideal is to use the maven dependency graph and possibly something like codeship's new ParallelCI to build the dependency graph in parallel. If a specific module's tests take too long you can split them into N modules, each with a few tests.

With the right amount of parallelism I bet you could get the tests down to 30 minutes … maximum.
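Roughly the shape I have in mind; a sketch only, with invented module groupings and a trivial helper that just shells out to mvn for one module at a time:

    import java.util.*;
    import java.util.concurrent.*;

    public class ParallelModuleTests {

        // Invented for illustration: modules bucketed into "levels" derived
        // from the maven dependency graph, so modules within one level do
        // not depend on each other and can run concurrently.
        static final List<List<String>> LEVELS = Arrays.asList(
                Arrays.asList("activemq-client"),
                Arrays.asList("activemq-broker", "activemq-kahadb-store"),
                Arrays.asList("activemq-unit-tests"));

        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(8);
            for (List<String> level : LEVELS) {
                List<Future<?>> batch = new ArrayList<>();
                for (String module : level) {
                    batch.add(pool.submit(() -> runModuleTests(module)));
                }
                for (Future<?> f : batch) {
                    f.get(); // finish the whole level before starting the next
                }
            }
            pool.shutdown();
        }

        static void runModuleTests(String module) {
            try {
                // -pl limits the build to one module; each fork could also
                // get its own working directory to avoid port/disk collisions.
                new ProcessBuilder("mvn", "-pl", module, "test")
                        .inheritIO().start().waitFor();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }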
Kevin

On Tue, Feb 24, 2015 at 3:31 PM, Hadrian Zbarcea wrote:

> Kevin,
>
> Go for it.
>
> Yes, our tests need a lot of love. Actually most of the tests are not really unit tests, but more like integration tests.
> Helping with the spring (as in season) cleanup would be immensely appreciated.
>
> Hadrian
>
> On 02/24/2015 02:18 PM, Kevin Burton wrote:
>
>> Ah. OK… I assume continuous integration isn't a goal. I'm upset if my testing takes more than 20 minutes :-P
>>
>> I bet that this would be taken down to an hour if they were parallelized. But of course someone needs to take the time to get that done.
>>
>> I might be allocating some of our internal dev resources to getting our build to parallelize across containers. If this is generalizable to other projects I'll OSS it. Would be great to get better testing feedback.
>>
>> Right now I think my patches are ready to be merged but I'd feel more comfortable suggesting it if I knew the tests worked.
>>
>> Should I propose it now and if there are any bugs we can fix them later?
>>
>> Kevin
>>
>> On Tue, Feb 24, 2015 at 10:54 AM, Timothy Bish wrote:
>>
>>> On 02/24/2015 01:22 PM, Kevin Burton wrote:
>>>
>>>> oh geez… seriously though, can you give me an estimate?
>>>>
>>>> Assuming 30 seconds each, I'm estimating about 11 hours.
>>>
>>> That's probably not too far off; usually I just let them run overnight for the all profile.
>>>
>>>> http://activemq.apache.org/junit-reports.html
>>>> https://hudson.apache.org/hudson/job/ActiveMQ/
>>>>
>>>> Hudson is down for ActiveMQ.
>>>>
>>>> Also, is there a disk space requirement for the tests? I have 10GB free or so and I'm getting TONS of these:
>>>
>>> I don't think anyone has looked into the minimum free space required for the tests to run.
>>>
>>>> java.lang.RuntimeException: Failed to start provided job scheduler store: JobSchedulerStore:activemq-data
>>>>     at java.io.RandomAccessFile.seek(Native Method)
>>>>     at org.apache.activemq.util.RecoverableRandomAccessFile.seek(RecoverableRandomAccessFile.java:384)
>>>>     at org.apache.activemq.store.kahadb.disk.page.PageFile.readPage(PageFile.java:877)
>>>>     at org.apache.activemq.store.kahadb.disk.page.Transaction.load(Transaction.java:427)
>>>>     at org.apache.activemq.store.kahadb.disk.page.Transaction.load(Transaction.java:377)
>>>>     at org.apache.activemq.store.kahadb.disk.index.BTreeIndex.load(BTreeIndex.java:159)
>>>>     at org.apache.activemq.store.kahadb.scheduler.JobSchedulerStoreImpl$MetaData.load(JobSchedulerStoreImpl.java:90)
>>>>     at org.apache.activemq.store.kahadb.scheduler.JobSchedulerStoreImpl$3.execute(JobSchedulerStoreImpl.java:277)
>>>>     at org.apache.activemq.store.kahadb.disk.page.Transaction.execute(Transaction.java:779)
>>>>     at org.apache.activemq.store.kahadb.scheduler.JobSchedulerStoreImpl.doStart(JobSchedulerStoreImpl.java:261)
>>>>     at org.apache.activemq.util.ServiceSupport.start(ServiceSupport.java:55)
>>>>     at org.apache.activemq.broker.BrokerService.setJobSchedulerStore(BrokerService.java:1891)
>>>>     at org.apache.activemq.transport.stomp.StompTestSupport.createBroker(StompTestSupport.java:174)
>>>>     at org.apache.activemq.transport.stomp.StompTestSupport.startBroker(StompTestSupport.java:112)
>>>>     at org.apache.activemq.transport.stomp.StompTestSupport.setUp(StompTestSupport.java:94)
>>>>     at org.apache.activemq.transport.stomp.Stomp12Test.setUp(Stomp12Test.java:41)
>>>>     at org.apache.activemq.transport.stomp.Stomp12SslAuthTest.setUp(Stomp12SslAuthTest.java:38)
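>>>>
>>>> (For what it's worth, I've started guarding my runs with a trivial free-space check first. A sketch; the 10GB threshold is just my guess, not a documented minimum:)
>>>>
>>>>     import java.io.File;
>>>>
>>>>     public class FreeSpaceCheck {
>>>>         public static void main(String[] args) {
>>>>             // Usable bytes on the partition holding the working
>>>>             // directory, where activemq-data ends up during the tests.
>>>>             long freeGb = new File(".").getUsableSpace() / (1024L * 1024 * 1024);
>>>>             if (freeGb < 10) { // arbitrary guess at a safe minimum
>>>>                 System.err.println("Only " + freeGb
>>>>                         + " GB free; expect seek()/IO failures like the one above.");
>>>>             }
>>>>         }
>>>>     }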
>>>> On Tue, Feb 24, 2015 at 10:09 AM, Timothy Bish wrote:
>>>>
>>>>> On 02/24/2015 01:01 PM, Kevin Burton wrote:
>>>>>
>>>>>> How long do the tests usually take?
>>>>>
>>>>> An eternity + 1hr
>>>>>
>>>>>> I'm looking at 45 minutes right now before I gave up… I think part of it was that the box I was testing on was virtualized and didn't have enough resources.
>>>>>>
>>>>>> I tried to parallelize the tests (-T 8 with maven) but I got other errors so I assume the ports are singletons.
>>>>>
>>>>> They won't be happy if you run them in parallel.
>>>>>
>>>>>> On Tue, Feb 24, 2015 at 8:03 AM, Gary Tully wrote:
>>>>>>
>>>>>>> if there are any test failures - try to run them individually with -Dtest=a,b etc. There may be an issue with a full test run, but all of the tests that are enabled should work. I know there are some issues with jdbc tests that hang or fail due to previous runs not cleaning up, but that should be the most of it. I got a bunch of full test runs before the 5.11 release if that is any help.
>>>>>>>
>>>>>>> On 23 February 2015 at 20:38, Kevin Burton wrote:
>>>>>>>
>>>>>>>> OK. This is ready to go and I have a patch branch:
>>>>>>>> https://issues.apache.org/jira/browse/AMQ-5609
>>>>>>>>
>>>>>>>> I'm stuck at the moment though because tests don't pass. But it was failing tests before so I don't think it has anything to do with my changes.
>>>>>>>>
>>>>>>>> On Sun, Feb 22, 2015 at 11:11 PM, Kevin Burton wrote:
>>>>>>>>
>>>>>>>>> Actually, is the lock even needed here? Why would it be? If we're *removing* a subscription, why does it care if we possibly ALSO remove a separate / isolated queue before/after the subscription is removed.
>>>>>>>>>
>>>>>>>>> I think this is redundant and can be removed. Maybe I'm wrong though.
>>>>>>>>>
>>>>>>>>> I looked at all the callers and none were associated with queues.
>>>>>>>>>
>>>>>>>>> On Sun, Feb 22, 2015 at 11:07 PM, Kevin Burton <burton@spinn3r.com> wrote:
>>>>>>>>>
>>>>>>>>>> So I have some working/theoretical code that should resolve this. It acquires a lock *per* ActiveMQDestination, this way there is no lock contention.
>>>>>>>>>>
>>>>>>>>>> But here's where I'm stuck.
>>>>>>>>>>
>>>>>>>>>>     @Override
>>>>>>>>>>     public void removeSubscription(ConnectionContext context,
>>>>>>>>>>                                    RemoveSubscriptionInfo info) throws Exception {
>>>>>>>>>>         inactiveDestinationsPurgeLock.readLock().lock();
>>>>>>>>>>         try {
>>>>>>>>>>             topicRegion.removeSubscription(context, info);
>>>>>>>>>>         } finally {
>>>>>>>>>>             inactiveDestinationsPurgeLock.readLock().unlock();
>>>>>>>>>>         }
>>>>>>>>>>     }
>>>>>>>>>>
>>>>>>>>>> .. this is in RegionBroker. There is no ActiveMQDestination involved here so I'm not sure the best way to resolve this.
>>>>>>>>>>
>>>>>>>>>> Any advice?
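>>>>>>>>>>
>>>>>>>>>> (For context, the per-destination side of it is just a lock table, roughly like this sketch; the class and method names are invented:)
>>>>>>>>>>
>>>>>>>>>>     import java.util.concurrent.ConcurrentHashMap;
>>>>>>>>>>     import java.util.concurrent.ConcurrentMap;
>>>>>>>>>>     import java.util.concurrent.locks.ReentrantReadWriteLock;
>>>>>>>>>>
>>>>>>>>>>     // One ReadWriteLock per destination, created on demand. The
>>>>>>>>>>     // purge thread takes the write lock for only the destination
>>>>>>>>>>     // it is removing; everything else takes that destination's
>>>>>>>>>>     // read lock, so unrelated destinations never contend with
>>>>>>>>>>     // the GC sweep.
>>>>>>>>>>     public class DestinationLockMap {
>>>>>>>>>>
>>>>>>>>>>         private final ConcurrentMap<String, ReentrantReadWriteLock> locks =
>>>>>>>>>>                 new ConcurrentHashMap<>();
>>>>>>>>>>
>>>>>>>>>>         public ReentrantReadWriteLock lockFor(String destinationName) {
>>>>>>>>>>             return locks.computeIfAbsent(destinationName,
>>>>>>>>>>                     name -> new ReentrantReadWriteLock());
>>>>>>>>>>         }
>>>>>>>>>>
>>>>>>>>>>         // Drop the entry once a destination is fully removed so
>>>>>>>>>>         // the map doesn't leak.
>>>>>>>>>>         public void release(String destinationName) {
>>>>>>>>>>             locks.remove(destinationName);
>>>>>>>>>>         }
>>>>>>>>>>     }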
>>>>>>>>>> On Sun, Feb 22, 2015 at 8:11 PM, Kevin Burton <burton@spinn3r.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> Yes. That was my thinking too.. that just replacing the CopyOnWriteArraySet with something more performant would solve the issue. This would also improve queue creation time as well as queue deletion time.
>>>>>>>>>>>
>>>>>>>>>>> What I think I'm going to do in the meantime is:
>>>>>>>>>>>
>>>>>>>>>>> - implement a granular lock based on queue name… I am going to use an interface so we can replace the implementation later.
>>>>>>>>>>>
>>>>>>>>>>> - implement timing for the purge thread so it tracks how long it takes to remove a queue but also how long the entire loop takes.
>>>>>>>>>>>
>>>>>>>>>>> I'll do this on a branch so it should be easy to merge.
>>>>>>>>>>>
>>>>>>>>>>> On Sun, Feb 22, 2015 at 7:40 PM, Tim Bain wrote:
>>>>>>>>>>>
>>>>>>>>>>>> A decent amount of the time is being spent calling remove() on various array-backed collections. Those data structures might be inappropriate for the number of destinations you're running, since array-backed collections tend to have add/remove operations that are O(N); some improvement might come from something as simple as moving to a ConcurrentHashSet instead of a CopyOnWriteArraySet, for example. (Or it might make performance worse because of other aspects of how those collections are used; people other than me would be in a better position to evaluate the full range of performance requirements for those collections.)
>>>>>>>>>>>>
>>>>>>>>>>>> Scheduler.cancel() also takes an alarming amount of time for what looks like a really simple method (http://grepcode.com/file/repo1.maven.org/maven2/org.apache.activemq/activemq-all/5.10.0/org/apache/activemq/thread/Scheduler.java#Scheduler.cancel%28java.lang.Runnable%29).
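>>>>>>>>>>>>
>>>>>>>>>>>> (To be precise, there is no ConcurrentHashSet class in the JDK itself; the usual stand-in is a Set view over a ConcurrentHashMap, e.g.:)
>>>>>>>>>>>>
>>>>>>>>>>>>     import java.util.Collections;
>>>>>>>>>>>>     import java.util.Set;
>>>>>>>>>>>>     import java.util.concurrent.ConcurrentHashMap;
>>>>>>>>>>>>
>>>>>>>>>>>>     public class ConcurrentSetExample {
>>>>>>>>>>>>         public static void main(String[] args) {
>>>>>>>>>>>>             // Concurrent Set backed by ConcurrentHashMap:
>>>>>>>>>>>>             // add/remove are roughly O(1), versus the O(N)
>>>>>>>>>>>>             // array copy CopyOnWriteArraySet does per mutation.
>>>>>>>>>>>>             Set<String> destinations = Collections.newSetFromMap(
>>>>>>>>>>>>                     new ConcurrentHashMap<String, Boolean>());
>>>>>>>>>>>>             destinations.add("queue://foo");
>>>>>>>>>>>>             destinations.remove("queue://foo");
>>>>>>>>>>>>             // Trade-off: CopyOnWriteArraySet gives cheap
>>>>>>>>>>>>             // snapshot iteration, which dispatch loops may rely
>>>>>>>>>>>>             // on; the hash-backed set gives cheap mutation but
>>>>>>>>>>>>             // only weakly consistent iteration.
>>>>>>>>>>>>         }
>>>>>>>>>>>>     }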
>>>>>>>>>>>> On Sun, Feb 22, 2015 at 7:56 PM, Kevin Burton <burton@spinn3r.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Here's a jprofiler view with the advisory support enabled if you're curious.
>>>>>>>>>>>>>
>>>>>>>>>>>>> http://i.imgur.com/I1jesZz.jpg
>>>>>>>>>>>>>
>>>>>>>>>>>>> I'm not familiar with the internals of ActiveMQ enough to have any obvious optimization ideas.
>>>>>>>>>>>>>
>>>>>>>>>>>>> One other idea I had (which would require a ton of refactoring I think) would be to potentially bulk delete all the queues at once.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Sun, Feb 22, 2015 at 6:42 PM, Kevin Burton <burton@spinn3r.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> And spending some more time in jprofiler, looks like 20% of this is due to schedulerSupport and the other 80% of this is due to advisorySupport.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> If I set both to false the total runtime of my tests drops in half… and the latencies fall from
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>     max create producer latency: 10,176 ms
>>>>>>>>>>>>>>     max create message on existing producer and consumer: 2 ms
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> … to
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>     max create producer latency: 1 ms
>>>>>>>>>>>>>>     max create message on existing producer and consumer: 1 ms
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> … and this is without even fixing the purge background lock.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> So the question now is what the heck is the advisory support doing that can result in such massive performance overhead.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> … and I think advisorySupport is enabled by default so that's problematic as well.
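>>>>>>>>>>>>>>
>>>>>>>>>>>>>> (For the record, this is all I flipped for the comparison above; these are stock BrokerService setters, and the connector URI is just an example:)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>     import org.apache.activemq.broker.BrokerService;
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>     public class NoAdvisoryBroker {
>>>>>>>>>>>>>>         public static void main(String[] args) throws Exception {
>>>>>>>>>>>>>>             BrokerService broker = new BrokerService();
>>>>>>>>>>>>>>             broker.setPersistent(false);       // matches our production setup
>>>>>>>>>>>>>>             broker.setAdvisorySupport(false);  // on by default; the 80% above
>>>>>>>>>>>>>>             broker.setSchedulerSupport(false); // the remaining 20%
>>>>>>>>>>>>>>             broker.addConnector("tcp://localhost:61616");
>>>>>>>>>>>>>>             broker.start();
>>>>>>>>>>>>>>         }
>>>>>>>>>>>>>>     }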
>>>>>>>>>>>>>> On Sun, Feb 22, 2015 at 4:45 PM, Kevin Burton <burton@spinn3r.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> OK. Loaded up JProfiler and confirmed that it's not LevelDB. This is a non-persistent broker I'm testing on.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Looks like it's spending all its time in CopyOnWriteArrayList.remove and Timer.purge…
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Which is hopeful because this is ALL due to ActiveMQ internals, and in theory LevelDB should perform well if we improve the performance of ActiveMQ internals and fix this lock bug.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Which would rock!
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> It should ALSO make queue creation faster.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Sun, Feb 22, 2015 at 4:10 PM, Kevin Burton <burton@spinn3r.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Sun, Feb 22, 2015 at 3:30 PM, Tim Bain <tbain@alumni.duke.edu> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> So if LevelDB cleanup during removeDestination() is the presumed culprit, can we spin off the LevelDB cleanup work into a separate thread (better: a task object to be serviced by a ThreadPool so you can avoid a fork bomb if we remove many destinations at once) so the call to removeDestination() can return quickly and LevelDB can do its record-keeping in the background without blocking message-processing?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Would that be possible? If the delete is pending on ActiveMQ there is a race where a producer could re-create it unless the lock is held. Though I guess if you dispatched to the GC thread WITH the lock still held you would be ok, but I think if we use the existing purge thread then we're fine.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> OK. I think I'm wrong about LevelDB being the issue. To be fair I wasn't 100% certain before, but I should have specified.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Our current production broker is running with persistent=false... and I just re-ran the tests without disk persistence and it has the same problem.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> So the main issue now is why the heck is ActiveMQ taking SO LONG to GC a queue. It's taking about 100ms, which is an insane amount of time considering this is done all in memory.
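>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> (Your task-object idea would look roughly like this; a sketch only, with invented names, using one shared single-thread pool rather than a thread per destination:)
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>     import java.util.concurrent.ExecutorService;
>>>>>>>>>>>>>>>>     import java.util.concurrent.Executors;
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>     // removeDestination() unregisters the destination while
>>>>>>>>>>>>>>>>     // the purge lock is held (closing the re-create race),
>>>>>>>>>>>>>>>>     // then hands the slow store cleanup to a shared
>>>>>>>>>>>>>>>>     // executor and returns immediately.
>>>>>>>>>>>>>>>>     public class DeferredStoreCleanup {
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>         private final ExecutorService cleanupPool =
>>>>>>>>>>>>>>>>                 Executors.newSingleThreadExecutor();
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>         public void removeDestination(String name, Runnable storeCleanup) {
>>>>>>>>>>>>>>>>             // ... unregister "name" from the region maps
>>>>>>>>>>>>>>>>             // here, under the lock, before scheduling ...
>>>>>>>>>>>>>>>>             cleanupPool.submit(storeCleanup);
>>>>>>>>>>>>>>>>         }
>>>>>>>>>>>>>>>>     }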
>>>>>>>>>>>>>>>> Kevin
>>>>>
>>>>> --
>>>>> Tim Bish
>>>>> Sr Software Engineer | RedHat Inc.
>>>>> tim.bish@redhat.com | www.redhat.com
>>>>> skype: tabish121 | twitter: @tabish121
>>>>> blog: http://timbish.blogspot.com/

--
Founder/CEO Spinn3r.com
Location: *San Francisco, CA*
blog: http://burtonator.wordpress.com
… or check out my Google+ profile