jackrabbit-dev mailing list archives

From Jukka Zitting <jukka.zitt...@gmail.com>
Subject Jackrabbit performance data
Date Wed, 04 Aug 2010 09:21:43 GMT
Hi,

There have been a number of requests and efforts to provide
performance figures for various kinds of tasks in Jackrabbit, but
so far we've been lacking a comprehensive test suite that would
provide comparable performance numbers over multiple Jackrabbit
releases. Such a test suite would help us evaluate how Jackrabbit
performance has evolved over time and better detect significant
regressions when they occur.

As you may have seen from JCR-2695 [1], we now have such a performance
test suite! I've uploaded the first set of basic performance plots to
the issue. See the benchmark project in [2] for the exact definition
of the simple test cases I've included so far.

To add new performance tests that measure features you're interested
in, please submit a patch that adds a new subclass of the
o.a.j.benchmark.PerformanceTest class. The code whose performance is
to be measured should be placed in the runTest() method, and any
setup/cleanup code required by the test should go in the
before/afterTest() and before/afterSuite() methods (executed
respectively before and after each test iteration and the entire test
suite). The runTest() method should normally not last more than a few
seconds, as the test suite will run the test multiple times over about
a minute and compute the average running time (plus other statistics)
of the test for better reliability. The first few runs of the test are
ignored to prevent things like cache warmup from affecting the test
results.
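To illustrate the lifecycle described above, here is a minimal sketch in
plain Java. The StandInPerformanceTest base class is a hypothetical
stand-in for o.a.j.benchmark.PerformanceTest (whose exact API is defined
in the jcr-benchmark project), and ConcatTest is a made-up example test;
only the hook names (runTest, before/afterTest, before/afterSuite) and
the warmup/averaging behavior come from the description above.

```java
// Hypothetical stand-in for o.a.j.benchmark.PerformanceTest; the real
// class lives in the jcr-benchmark project. Hook names follow the mail.
abstract class StandInPerformanceTest {

    protected void beforeSuite() throws Exception {}  // once, before all runs
    protected void beforeTest() throws Exception {}   // before each iteration
    protected abstract void runTest() throws Exception;  // the measured code
    protected void afterTest() throws Exception {}    // after each iteration
    protected void afterSuite() throws Exception {}   // once, after all runs

    // Simplified driver: run the test repeatedly, ignore the first few
    // (warmup) runs, and return the average running time of the rest.
    public double run(int warmupRuns, int measuredRuns) throws Exception {
        beforeSuite();
        long totalNanos = 0;
        for (int i = 0; i < warmupRuns + measuredRuns; i++) {
            beforeTest();
            long start = System.nanoTime();
            runTest();
            long elapsed = System.nanoTime() - start;
            afterTest();
            if (i >= warmupRuns) {
                totalNanos += elapsed;  // warmup runs are discarded
            }
        }
        afterSuite();
        return totalNanos / 1e6 / measuredRuns;  // average, in milliseconds
    }
}

// Made-up example test: each iteration gets fresh state from beforeTest()
// and runTest() contains only the code being measured.
class ConcatTest extends StandInPerformanceTest {

    private StringBuilder buffer;

    @Override
    protected void beforeTest() {
        buffer = new StringBuilder();  // per-iteration setup
    }

    @Override
    protected void runTest() {
        for (int i = 0; i < 100_000; i++) {
            buffer.append(i);
        }
    }
}

class Demo {
    public static void main(String[] args) throws Exception {
        double avgMs = new ConcatTest().run(5, 20);
        System.out.println("average run time: " + avgMs + " ms");
    }
}
```

A real test would do its per-iteration setup (e.g. creating test content
in a repository session) in beforeTest() so that only the operation of
interest is timed inside runTest().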

[1] https://issues.apache.org/jira/browse/JCR-2695
[2] http://svn.apache.org/repos/asf/jackrabbit/commons/jcr-benchmark/trunk/

BR,

Jukka Zitting
