jackrabbit-oak-commits mailing list archives

From: ju...@apache.org
Subject: svn commit: r1526222 - /jackrabbit/oak/trunk/oak-run/README.md
Date: Wed, 25 Sep 2013 16:29:05 GMT
Author: jukka
Date: Wed Sep 25 16:29:04 2013
New Revision: 1526222

URL: http://svn.apache.org/r1526222
Log:
OAK-641: Improved benchmark tooling

Document how to add new benchmarks

Modified:
    jackrabbit/oak/trunk/oak-run/README.md

Modified: jackrabbit/oak/trunk/oak-run/README.md
URL: http://svn.apache.org/viewvc/jackrabbit/oak/trunk/oak-run/README.md?rev=1526222&r1=1526221&r2=1526222&view=diff
==============================================================================
--- jackrabbit/oak/trunk/oak-run/README.md (original)
+++ jackrabbit/oak/trunk/oak-run/README.md Wed Sep 25 16:29:04 2013
@@ -18,8 +18,13 @@ The following benchmark options (with de
 
     --host localhost   - MongoDB host
     --port 27017       - MongoDB port
+    --mmap <64bit?>    - TarMK memory mapping (defaults to true on 64-bit JVMs)
     --cache 100        - cache size (in MB)
     --wikipedia <file> - Wikipedia dump
+    --runAsAdmin false - Run test as admin session
+    --itemsToRead 1000 - Number of items to read
+    --bgReaders 20     - Number of background readers
+    --report false     - Whether to output intermediate results
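+
+For example, a benchmark run might be invoked like this (a hypothetical
+command line; the exact jar name and the available test and fixture names
+depend on the build):
+
+    java -jar oak-run-*.jar benchmark --cache 200 ReadPropertyTest Oak-Tar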
 
 These options are passed to the test cases and repository fixtures
 that need them. For example the Wikipedia dump option is needed by the
@@ -97,3 +102,77 @@ They DO NOT directly indicate the kind o
 could expect with (the current state of) Oak. Instead they are
 designed to isolate implementation-level bottlenecks and to help
 measure and profile the performance of specific, isolated features.
+
+How to add a new benchmark
+--------------------------
+
+To add a new test case to this benchmark suite, you'll need to implement
+the `Benchmark` interface and add an instance of the new test to the
+`allBenchmarks` array in the `BenchmarkRunner` class in the
+`org.apache.jackrabbit.oak.benchmark` package.
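+
+For illustration, the registration might look like this (a sketch; the
+actual contents of the array in `BenchmarkRunner` will differ):
+
+    private static final Benchmark[] allBenchmarks = new Benchmark[] {
+        // ... the existing benchmarks ...
+        new MyTest()
+    };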
+
+The best way to implement the `Benchmark` interface is to extend the
+`AbstractTest` base class that takes care of most of the benchmarking
+details. The outline of such a benchmark is:
+
+    class MyTest extends AbstractTest {
+        @Override
+        protected void beforeSuite() throws Exception {
+            // optional, run once before all the iterations,
+            // not included in the performance measurements
+        }
+        @Override
+        protected void beforeTest() throws Exception {
+            // optional, run before runTest() on each iteration,
+            // but not included in the performance measurements
+        }
+        @Override
+        protected void runTest() throws Exception {
+            // required, run repeatedly during the benchmark,
+            // and the time of each iteration is measured.
+            // The ideal execution time of this method is
+            // from a few hundred to a few thousand milliseconds.
+            // Use a loop if the operation you're hoping to measure
+            // is faster than that.
+        }
+        @Override
+        protected void afterTest() throws Exception {
+            // optional, run after runTest() on each iteration,
+            // but not included in the performance measurements
+        }
+        @Override
+        protected void afterSuite() throws Exception {
+            // optional, run once after all the iterations,
+            // not included in the performance measurements
+        }
+    }
+
+The rough outline of how the benchmark will be run is:
+
+    test.beforeSuite();
+    for (...) {
+        test.beforeTest();
+        recordStartTime();
+        test.runTest();
+        recordEndTime();
+        test.afterTest();
+    }
+    test.afterSuite();
+
+You can use the `loginWriter()` and `loginReader()` methods to create admin
+and anonymous sessions. There's no need to log out of those sessions (unless
+doing so is relevant to the benchmark), as they are closed automatically
+after the benchmark has completed and the `afterSuite()` method has been
+called.
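+
+For example, a simple read benchmark might use the sessions like this (a
+sketch, assuming the methods return standard `javax.jcr.Session` instances):
+
+    class MyReadTest extends AbstractTest {
+        private Session session;
+        @Override
+        protected void beforeSuite() throws Exception {
+            session = loginWriter(); // admin session, closed automatically
+            session.getRootNode().addNode("test").setProperty("foo", "bar");
+            session.save();
+        }
+        @Override
+        protected void runTest() throws Exception {
+            // loop to keep a single iteration measurable
+            for (int i = 0; i < 1000; i++) {
+                session.getProperty("/test/foo").getString();
+            }
+        }
+    }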
+
+Similarly, you can use the `addBackgroundJob(Runnable)` method to add
+background tasks that will be run concurrently while the main benchmark is
+executing. The relevant background thread works like this:
+
+    while (running) {
+        runnable.run();
+        Thread.yield();
+    }
+
+As you can see, the `run()` method of the background task gets invoked
+repeatedly. Such threads are stopped automatically once all test iterations
+are done, before the `afterSuite()` method is called.
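+
+For example, a concurrent reader could be set up in `beforeSuite()` like
+this (a sketch; the session use and error handling are illustrative):
+
+    final Session background = loginReader(); // anonymous session
+    addBackgroundJob(new Runnable() {
+        @Override
+        public void run() {
+            try {
+                background.refresh(false); // pick up the latest changes
+            } catch (RepositoryException e) {
+                throw new RuntimeException(e);
+            }
+        }
+    });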


