From: Janne Jalkanen
To: jspwiki-dev@incubator.apache.org
Subject: Re: Current tests slowdowns analysis
Date: Sun, 29 Nov 2009 02:09:14 +0200

I've been thinking
about moving to JUnit 4, but so far there hasn't really been a
compelling reason. This could be it...

One possible way could of course be to override the Stripes default
implementation for tests. That would make it fairly fast.

/Janne

On Nov 29, 2009, at 01:22 , Andrew Jaquith wrote:

> Looks like if we used JUnit 4, we could use methods annotated with
> @BeforeClass to set up fixtures that would persist between tests in a
> single test class. We'd get a lot of savings in classes like
> JSPWikiMarkupParserTest that have 200 tests, for example.
>
> Also, I've investigated (a little) methods for reducing the need to
> run ResolverUtil. We would probably still need to run it at least
> twice: once for all of JSPWiki's needs, and once for Stripes itself.
> But we'd need some way of registering "interest" in particular classes
> to discover, so that it could all be done in one pass. I wrote the
> class org.apache.wiki.ui.stripes.IsOneOf exactly for this purpose
> (match against multiple target classes), but it's not optimized for
> speed yet.
>
> Andrew
>
> On Sat, Nov 28, 2009 at 5:52 PM, Andrew Jaquith wrote:
>> Janne, I think your attachment got stripped out. Can you re-send
>> (maybe directly?)
>>
>> I agree that we ought to figure out some way of using some sort of
>> singleton (or singleton-per-wikiengine) object to stash the results
>> of findImplementations(). Not sure how this would work with JUnit,
>> though -- I should do some research. What we'd need is the ability
>> to create test fixture objects that persist across the entire run...
>>
>> Andrew
>>
>> On Sat, Nov 28, 2009 at 4:48 PM, Janne Jalkanen wrote:
>>> Folks, here's a screenshot from JProfiler. This should explain
>>> why our tests are fairly slow...
>>>
>>> Simply put; we're not using EhCache, and also we're calling Stripes
>>> ResolverUtil.findImplementations twice per WikiEngine startup.
>>> So it might make sense to move findImplementations() calls into a
>>> singleton or something. But I'm not too sure whether it makes sense
>>> considering restarts -- or perhaps restarts should clean away the
>>> singleton cache?
>>>
>>> (This is after about 700 tests were run; I didn't want to wait
>>> until they had all finished, since it had already taken about two
>>> hours with profiling on...)
>>>
>>> Priha can be seen taking quite a lot of time as well, but that's
>>> because it needs to hit the disk all the time. More optimization
>>> for FileProvider is needed, but partly it's also because we're not
>>> caching anything.
>>>
>>> /Janne
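[Editor's note] The singleton-style cache the thread discusses could be sketched roughly as below. This is a minimal illustration, not existing JSPWiki or Stripes code: the class and member names (`ImplementationCache`, `scanCount`) are hypothetical, and a real version would delegate to Stripes' `ResolverUtil.findImplementations()` for the expensive classpath scan instead of faking a result. It also includes the `clear()` hook Janne raises for handling WikiEngine restarts.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: memoize discovery results per parent type so the
// expensive scan runs once per JVM, not once per WikiEngine startup.
public final class ImplementationCache {
    private static final Map<Class<?>, Set<String>> CACHE =
            new ConcurrentHashMap<>();

    // Instrumentation for this sketch only: counts real scans performed.
    static int scanCount = 0;

    private ImplementationCache() {}

    public static Set<String> findImplementations(Class<?> parent) {
        // computeIfAbsent runs the scan only on the first lookup for
        // this parent type; later calls return the cached result.
        return CACHE.computeIfAbsent(parent, p -> {
            scanCount++;
            // Real code would call Stripes' ResolverUtil here; we fake
            // a single discovered class name for illustration.
            return Set.of(p.getSimpleName() + "Impl");
        });
    }

    // Restart support: a WikiEngine restart can wipe the cache so the
    // next startup rescans with the (possibly changed) classpath.
    public static void clear() {
        CACHE.clear();
    }
}
```

The trade-off Janne points at is visible here: the cache survives across test classes (the speed win), so restarts must explicitly call `clear()` or risk serving stale discovery results.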