jackrabbit-oak-dev mailing list archives

From Thomas Mueller <muel...@adobe.com>
Subject Re: Oak benchmarks (Was: [jr3] Index on randomly distributed data)
Date Fri, 09 Mar 2012 11:09:29 GMT

>The goals as currently defined are too vague
>(what kind of read access patterns, how much data per node, how big a
>cluster, etc.)

I propose the following use cases:

* initial loading (something all users need to do at some point, either
all at once or incrementally)

* iterating over all nodes (indexing, search if there is no index, data
store garbage collection, export, consistency check)

* reading and writing in chunks of 100 nodes (not sure if that's a
realistic pattern)
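To make the chunked read/write pattern concrete, here is an illustrative sketch only (not Oak code; a plain dict stands in for the repository, and "save" is simulated by flushing a buffer every 100 nodes):

```python
# Illustrative sketch: buffer node writes and flush ("save") every
# chunk_size nodes, as in the proposed 100-nodes-per-chunk use case.
# The store is just an in-memory dict, not a real repository.
CHUNK_SIZE = 100

def write_in_chunks(store, total_nodes, chunk_size=CHUNK_SIZE):
    """Write total_nodes entries, flushing every chunk_size; return flush count."""
    buffer = {}
    saves = 0
    for i in range(total_nodes):
        buffer[f"/content/node-{i}"] = {"index": i}
        if len(buffer) == chunk_size:
            store.update(buffer)   # simulated session save
            buffer.clear()
            saves += 1
    if buffer:                     # flush the final partial chunk
        store.update(buffer)
        saves += 1
    return saves

store = {}
print(write_in_chunks(store, 1050))  # 10 full chunks + 1 partial -> 11
```

A benchmark built on this pattern would replace the dict with a real session and time the flushes.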

As for 'real-world data', we could use an Adobe CQ installation, or
simulate a similar structure.

>(creating 10 trillion nodes at a rate of one node per millisecond
>takes 31 years).

But only about 1.2 days with 10'000 nodes/s per instance and 10'000
cluster instances :-)
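The back-of-the-envelope arithmetic behind that figure can be checked as follows (reading the rate as per-instance throughput, so the aggregate is 10'000 × 10'000 = 10^8 nodes/s):

```python
# Sanity check of the throughput figures above (illustrative only).
TOTAL_NODES = 10_000_000_000_000   # 10 trillion nodes
RATE_PER_INSTANCE = 10_000         # nodes/s written by one instance
INSTANCES = 10_000                 # cluster size

seconds = TOTAL_NODES / (RATE_PER_INSTANCE * INSTANCES)  # 1e5 s
days = seconds / 86_400            # seconds per day
print(f"{days:.2f} days")          # about 1.16 days
```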

