river-dev mailing list archives

Site index · List index
Message view « Date » · « Thread »
Top « Date » · « Thread »
From: Patricia Shanahan <p...@acm.org>
Subject: Benchmark organization
Date: Tue, 22 Feb 2011 00:37:07 GMT
I want to get going on some performance tuning, but believe it is best 
guided and controlled by well-organized benchmarks. To that end, I 
propose adding a place for benchmarks to the River structure.

We will need several categories of benchmark code:

1. System-level benchmarks. These measure the performance of public
features, such as the outrigger JavaSpace implementation. For these, I
think a structure similar to that of the QA tests may be best. However, I
need to understand how the QA harness links clients and servers together,
and whether that linkage has any special performance implications. We may,
for example, need to add network delays so that implementations involving
different amounts of communication are scored fairly.
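
To make that concrete, here is a minimal sketch of what the body of such a
benchmark might look like: it times write/take round trips against a
JavaSpace proxy. The entry and class names are invented for illustration,
and obtaining the space proxy (lookup, configuration, harness wiring) is
left out entirely, since that is exactly the part the QA harness would own.

import net.jini.core.entry.Entry;
import net.jini.core.lease.Lease;
import net.jini.space.JavaSpace;

/**
 * Sketch only: times write/take round trips against a JavaSpace.
 * How the space proxy is obtained is deliberately omitted.
 */
public class SpaceRoundTripBenchmark {

    /** Hypothetical entry type used only for this sketch. */
    public static class PingEntry implements Entry {
        public Integer seq;                    // entry fields must be public objects
        public PingEntry() {}
        public PingEntry(Integer seq) { this.seq = seq; }
    }

    /** Returns the mean nanoseconds per write/take round trip. */
    public static long timeRoundTrips(JavaSpace space, int iterations)
            throws Exception {
        PingEntry template = new PingEntry();  // null fields match any PingEntry
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            space.write(new PingEntry(Integer.valueOf(i)), null, Lease.FOREVER);
            space.take(template, null, Long.MAX_VALUE);
        }
        return (System.nanoTime() - start) / iterations;
    }
}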

2. Internal benchmarks. These are more like unit tests, and need to 
mirror the main src package structure so that they can access non-public 
code.
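
For this category the essential point is the package declaration: the file
lives under the benchmark source root but declares the same package as the
implementation, so package-private code is visible to it. A rough sketch,
using the outrigger package as the example and purely illustrative names
for everything else:

package com.sun.jini.outrigger;

/**
 * Sketch of an internal micro-benchmark. Because it declares the same
 * package as the implementation, it can call package-private classes
 * and methods directly; the workload passed in would wrap such a call.
 */
public class InternalBenchmark {

    /** Runs the workload and returns mean nanoseconds per invocation. */
    static long measure(Runnable workload, int iterations) {
        workload.run();                        // one warm-up call
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            workload.run();
        }
        return (System.nanoTime() - start) / iterations;
    }

    public static void main(String[] args) {
        // The workload would normally exercise a package-private method,
        // e.g. new SomeInternalHolder().match(template) -- hypothetical name.
        long perCall = measure(new Runnable() {
            public void run() { /* package-private call goes here */ }
        }, 1000000);
        System.out.println("mean ns/call: " + perCall);
    }
}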

3. Experimental code. In some situations it is useful to do run-offs 
between two or more implementations of the same class. We cannot have 
two classes with the same fully qualified name at the same time, so this 
type of test will need special copies of the classes with modified class 
names or package names. In addition to actually doing the tests and 
picking the implementation to go in the trunk, it is useful to keep 
discarded candidates around. One of them may turn out to be a better 
basis for a future performance campaign.
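
A run-off harness along these lines might do, with the renamed candidate
copies hidden behind a small common interface so the driver does not care
which copy it is timing. All names below are hypothetical, and the two
candidates here are trivial placeholders for the real renamed classes:

/** Sketch of a run-off driver comparing two renamed candidate copies. */
public class RunOff {

    interface Candidate {                      // common contract for the candidates
        int work(int input);
    }

    static class CandidateA implements Candidate {     // e.g. array-based copy
        public int work(int input) { return input * 31; }
    }

    static class CandidateB implements Candidate {     // e.g. linked copy
        public int work(int input) { return (input << 5) - input; }
    }

    /** Returns mean nanoseconds per call for one candidate. */
    static long time(Candidate c, int iterations) {
        long start = System.nanoTime();
        int sink = 0;
        for (int i = 0; i < iterations; i++) {
            sink += c.work(i);                 // accumulate so the call is not dead code
        }
        long mean = (System.nanoTime() - start) / iterations;
        if (sink == 42) System.out.print("");  // keep sink observably live
        return mean;
    }

    public static void main(String[] args) {
        int n = 5000000;
        System.out.println("A: " + time(new CandidateA(), n) + " ns/op");
        System.out.println("B: " + time(new CandidateB(), n) + " ns/op");
    }
}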

Thoughts? Alternatives? Comments?

Patricia
