jackrabbit-dev mailing list archives

From Ard Schrijvers <a.schrijv...@onehippo.com>
Subject Re: Jackrabbit performance data
Date Mon, 09 Aug 2010 13:53:10 GMT
Hello,

On Wed, Aug 4, 2010 at 11:21 AM, Jukka Zitting <jukka.zitting@gmail.com> wrote:
>
> To add new performance tests that measure features you're interested
> in, please submit a patch that adds a new subclass of the
> o.a.j.benchmark.PerformanceTest class. The code whose performance is
> to be measured should be placed in the runTest() method and any
> setup/cleanup code required by the test should go to the
> before/afterTest() and before/afterSuite() methods (executed
> respectively before and after each test iteration and the entire test
> suite). The runTest() method should normally not last more than a few
> seconds, as the test suite will run the test multiple times over about
> a minute and compute the average running time (plus other statistics)
> of the test for better reliability. The first few runs of the test are
> ignored to prevent things like cache warmup from affecting the test
> results.
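
[Editor's note: the lifecycle Jukka describes could be sketched roughly as below. The base class here is a self-contained stand-in for o.a.j.benchmark.PerformanceTest — only the method names (runTest, before/afterTest, before/afterSuite) and the warmup-then-average behaviour come from the mail; the driver logic, signatures, and the ExampleTest subclass are assumptions for illustration, not the actual jcr-benchmark API.]

```java
// Stand-in sketch of the PerformanceTest lifecycle described above.
// Method names follow the mail; everything else is assumed.
abstract class PerformanceTest {
    public void beforeSuite() throws Exception {}  // once, before the suite
    public void beforeTest() throws Exception {}   // before each iteration
    public abstract void runTest() throws Exception; // the code being measured
    public void afterTest() throws Exception {}    // after each iteration
    public void afterSuite() throws Exception {}   // once, after the suite

    // Simplified driver: run the test repeatedly, discard the first
    // few (warmup) runs, and return the average time of the rest in ms.
    public double measure(int warmup, int iterations) throws Exception {
        beforeSuite();
        long total = 0;
        for (int i = 0; i < warmup + iterations; i++) {
            beforeTest();
            long start = System.nanoTime();
            runTest();
            long elapsed = System.nanoTime() - start;
            afterTest();
            if (i >= warmup) {
                total += elapsed; // only post-warmup runs count
            }
        }
        afterSuite();
        return total / (double) iterations / 1_000_000.0;
    }
}

// Hypothetical test: in a real subclass, runTest() would exercise a
// repository operation; here it just does some repeatable work.
class ExampleTest extends PerformanceTest {
    private StringBuilder data;

    @Override public void beforeTest() { data = new StringBuilder(); }

    @Override public void runTest() {
        for (int i = 0; i < 10_000; i++) {
            data.append(i);
        }
    }
}

public class Main {
    public static void main(String[] args) throws Exception {
        double avgMs = new ExampleTest().measure(3, 10);
        System.out.println("average run: " + avgMs + " ms");
    }
}
```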

First of all, thanks a lot for this Jukka. I really like it. Would you
have an idea how we could measure performance for larger repositories?
For example, I would be glad to add some query performance tests, but,
obviously, querying can be very sensitive to the number of nodes. I
would be interested in the performance of some queries (XPath, SQL
and QOM) against different repository versions, and specifically in
queries against large repositories. I understand if it is not feasible
because the tests would take too long. WDYT?

Regards Ard

>
> [1] https://issues.apache.org/jira/browse/JCR-2695
> [2] http://svn.apache.org/repos/asf/jackrabbit/commons/jcr-benchmark/trunk/
>
> BR,
>
> Jukka Zitting
>
