jackrabbit-oak-dev mailing list archives

From Jukka Zitting <jukka.zitt...@gmail.com>
Subject Re: Oak benchmarks (Was: [jr3] Index on randomly distributed data)
Date Fri, 09 Mar 2012 10:31:05 GMT

On Thu, Mar 8, 2012 at 5:14 PM, Thomas Mueller <mueller@adobe.com> wrote:
>>To start with, I'd target the following basic deployment configurations:
>>* 1 node, MB-range test sets (small embedded or development/testing deployment)
>>* 4 nodes, GB-range test sets (mid-size non-cloud deployment)
>>* 16 nodes, TB-range test sets (low-end cloud deployment)
> I interpret the goals we defined at [1] as:
> * read throughput: no degradation from current Jackrabbit 2
> * single repository (without clustering): 100 million nodes
> * cluster: 10 trillion nodes

Yep, a big part of the point of defining actual benchmarks is that such
goals can be made more concrete. The goals as currently defined are too
vague (what kind of read access patterns, how much data per node, how
big a cluster, etc.) and perhaps not well grounded in actual use cases
(creating 10 trillion nodes at a rate of one node per millisecond
would take roughly 317 years).
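As a sanity check, that parenthetical figure can be reproduced with a few lines of arithmetic. The node count and rate below come from the thread; everything else is just unit conversion:

```python
# Back-of-the-envelope check: how long does it take to create
# 10 trillion nodes at a sustained rate of 1 node per millisecond?

NODES = 10 * 10**12               # 10 trillion nodes (the stated cluster goal)
RATE = 1_000                      # nodes per second (= 1 node per millisecond)
SECONDS_PER_YEAR = 365.25 * 24 * 3600

years = NODES / RATE / SECONDS_PER_YEAR
print(f"{years:.0f} years")       # prints "317 years"
```

Which suggests the write-throughput side of any realistic benchmark matters just as much as the raw node-count target.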

So what I'd like to see here are ideas for more specific benchmarks
that model real-world use cases and deployments that we expect the
repository to be able to support.


Jukka Zitting
