river-dev mailing list archives

From MICHAEL MCGRADY <mmcgr...@topiatechnology.com>
Subject Re: Benchmark organization
Date Tue, 22 Feb 2011 22:00:50 GMT
Does anyone have access to NetSim for Apache or for River?  NetSim and
similar applications are just the thing for this kind of benchmarking.



MG


On Feb 22, 2011, at 1:32 PM, Gregg Wonderly wrote:

> On 2/22/2011 12:05 PM, Patricia Shanahan wrote:
>> On 2/22/2011 12:16 AM, Peter Firmstone wrote:
>>> Patricia Shanahan wrote:
>>>> I want to get going on some performance tuning, but believe it is best
>>>> guided and controlled by well-organized benchmarks. To that end, I
>>>> propose adding a place for benchmarks to the River structure.
>>>> 
>>>> We will need several categories of benchmark code:
>>>> 
>>>> 1. System level benchmarks. These benchmarks measure some public
>>>> features, such as the outrigger JavaSpace implementation. For these, I
>>>> think a similar structure to QA may be best. However, I need to
>>>> understand how the QA harness links together clients and servers, and
>>>> whether it has any special performance implications. We may need, for
>>>> example, to add network delays to properly score implementations that
>>>> involve different amounts of communication (see the delay-relay sketch
>>>> after this list).
>>>> 
>>>> 2. Internal benchmarks. These are more like unit tests, and need to
>>>> mirror the main src package structure so that they can access
>>>> non-public code.
>>>> 
>>>> 3. Experimental code. In some situations it is useful to do run-offs
>>>> between two or more implementations of the same class. We cannot have
>>>> two classes with the same fully qualified name at the same time, so
>>>> this type of test will need special copies of the classes with
>>>> modified class names or package names. In addition to actually doing
>>>> the tests and picking the implementation to go in the trunk, it is
>>>> useful to keep discarded candidates around. One of them may turn out
>>>> to be a better basis in a future performance campaign.
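
(For item 1, one way to add a repeatable network delay without touching the
code under test is a small TCP relay that sleeps before forwarding each
buffer. A minimal sketch, assuming nothing River-specific: the class name,
arguments, and per-buffer delay policy are all made up for illustration.)

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    /** Relays localPort to targetHost:targetPort, sleeping before each forwarded buffer. */
    public class DelayRelay {
        public static void main(String[] args) throws Exception {
            int localPort = Integer.parseInt(args[0]);
            String targetHost = args[1];
            int targetPort = Integer.parseInt(args[2]);
            final long delayMillis = Long.parseLong(args[3]);

            ServerSocket server = new ServerSocket(localPort);
            while (true) {
                Socket client = server.accept();
                Socket target = new Socket(targetHost, targetPort);
                pump(client.getInputStream(), target.getOutputStream(), delayMillis);
                pump(target.getInputStream(), client.getOutputStream(), delayMillis);
            }
        }

        /** Copies one direction of the connection on its own thread. */
        private static void pump(final InputStream in, final OutputStream out,
                                 final long delayMillis) {
            new Thread(new Runnable() {
                public void run() {
                    byte[] buf = new byte[8192];
                    try {
                        int n;
                        while ((n = in.read(buf)) != -1) {
                            Thread.sleep(delayMillis); // the simulated network latency
                            out.write(buf, 0, n);
                            out.flush();
                        }
                    } catch (Exception e) {
                        // peer closed the connection; let the thread exit
                    }
                }
            }).start();
        }
    }

(Pointing the benchmark client at the relay port instead of the real service
port applies the delay uniformly, so implementations that make more round
trips pay proportionally more.)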
>>>> 
>>>> Thoughts? Alternatives? Comments?
>>>> 
>>>> Patricia
>>>> 
>>> +1 to 1 and 2, not sure how to handle 3 - Peter.
>>> 
>>> I wonder if we could have a location for long-term experimental code in
>>> skunk?
>>> 
>>> If the experiment into a modular build is successful (my apologies for
>>> my recent lack of time), we could simply create an experimental module
>>> and compare it against the original.
>>> 
>> 
>> We won't always be able to integrate an experiment with its proper package until
>> after the experiment has been done.
>> 
>> For example, my recent FastList changes involved a change in how a FastList user
>> scans the list, from one based on list.head() and node.next() to making FastList
>> Iterable. I did not do the changes to the rest of outrigger to compile with the
>> new interface until after I had assured myself that at least one Iterable
>> implementation was as fast as the old implementation.
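
(The shape of that change, with a toy list standing in for FastList; the
real outrigger signatures may differ from this sketch:)

    import java.util.Iterator;
    import java.util.NoSuchElementException;

    /** Toy stand-in for FastList, just to contrast the two scanning styles. */
    public class ScanStyles {

        static class Node {
            final String value;
            Node next;
            Node(String value) { this.value = value; }
            Node next() { return next; }
        }

        /** Minimal list exposing both the old head()/next() walk and Iterable. */
        static class ToyFastList implements Iterable<Node> {
            private Node head;

            void add(String value) {
                Node n = new Node(value);
                n.next = head;
                head = n;
            }

            Node head() { return head; }

            public Iterator<Node> iterator() {
                return new Iterator<Node>() {
                    private Node cur = head;
                    public boolean hasNext() { return cur != null; }
                    public Node next() {
                        if (cur == null) throw new NoSuchElementException();
                        Node n = cur;
                        cur = cur.next;
                        return n;
                    }
                    public void remove() { throw new UnsupportedOperationException(); }
                };
            }
        }

        public static void main(String[] args) {
            ToyFastList list = new ToyFastList();
            list.add("a");
            list.add("b");

            // Old style: every caller walks head()/next() itself.
            for (Node n = list.head(); n != null; n = n.next()) {
                System.out.println(n.value);
            }

            // New style: the list is Iterable, so callers just use for-each.
            for (Node n : list) {
                System.out.println(n.value);
            }
        }
    }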
>> 
>> I'm also dubious about doing performance comparisons with different environments
>> for the code being compared. My ideal is a program that can cycle among
>> implementations in a single run. Next best is a program that measures a run-time
>> selected implementation, but with everything except the code under test
>> unchanged. Everything involved must be built with the same compiler version and
>> parameters, so I strongly prefer a single build.
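
(The single-run ideal might look roughly like this; the Candidate wrapper
and the stand-in workloads are placeholders, not existing River code:)

    import java.util.concurrent.Callable;

    /** Times two or more candidate implementations of the same operation in one JVM run. */
    public class RunOff {

        static class Candidate {
            final String name;
            final Callable<Long> work;
            Candidate(String name, Callable<Long> work) {
                this.name = name;
                this.work = work;
            }
        }

        public static void main(String[] args) throws Exception {
            // Stand-in workloads; real candidates would invoke the classes under test.
            Candidate[] candidates = {
                new Candidate("candidateA", new Callable<Long>() {
                    public Long call() {
                        long s = 0;
                        for (int i = 0; i < 1000; i++) s += i;
                        return s;
                    }
                }),
                new Candidate("candidateB", new Callable<Long>() {
                    public Long call() {
                        long s = 0;
                        for (int i = 1000; i > 0; i--) s += i;
                        return s;
                    }
                }),
            };

            for (Candidate c : candidates) {
                for (int i = 0; i < 10000; i++) {
                    c.work.call();          // warm-up, so the JIT compiles each candidate
                }
                long sink = 0;              // consume results so the work is not dead code
                long start = System.nanoTime();
                for (int i = 0; i < 100000; i++) {
                    sink += c.work.call();
                }
                long elapsed = System.nanoTime() - start;
                System.out.println(c.name + ": " + (elapsed / 1000000) + " ms (sink=" + sink + ")");
            }
        }
    }

(Because every candidate runs in the same JVM from the same build, this
avoids the different-environment problem described above.)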
>> 
>> I'm not sure how all these issues would be handled in the modular build
>> environment.
> 
> One thing you might do is use interfaces to bridge the gap.  In particular, for a class,
> you can move the class to a newly named class, and make the old class name into an interface
> which is then implemented by the new test/experiment class.  This won't work across the board,
> so you might then have to use an abstract class if there is a variable reference or other
> implementation detail that requires a class instead of an interface.
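
(Concretely, with hypothetical names rather than actual outrigger code: the
old concrete class name becomes an interface, and each candidate moves
behind a new name.)

    // The old concrete class name becomes an interface...
    interface FastList {
        int size();
    }

    // ...and each candidate implementation gets its own name behind it.
    class FastListOriginal implements FastList {
        public int size() { return 0; /* original logic */ }
    }

    class FastListExperimental implements FastList {
        public int size() { return 0; /* candidate logic */ }
    }

    // If callers reference fields directly, or rely on other implementation
    // details, FastList has to become an abstract class instead.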
> 
> Just some thoughts.
> 
> Gregg

Michael McGrady
Chief Architect
Topia Technology, Inc.
Cell 1.253.720.3365
Work 1.253.572.9712 extension 2037
mmcgrady@topiatechnology.com



