river-dev mailing list archives

From: Gregg Wonderly <gr...@wonderly.org>
Subject: Re: Benchmark organization
Date: Mon, 28 Feb 2011 16:20:13 GMT
On 2/28/2011 9:54 AM, Patricia Shanahan wrote:
> Gregg Wonderly wrote:
> ...
>> Can you help me understand why multiple builds are not working? It just seems
>> to me that you'd build the existing code "once" to test, and then iterate
>> through your development practices to work on your "Experiment", testing with
>> that output as you go.
> ...
>
> Performance work is a bit different from functional development.

I am not sure.  Your consideration of "will I always want to do it this way" is, 
for me, exactly what happens in software development, at all levels.  Performance 
is an interesting consideration, but as you say, factors everywhere affect it, so 
from my perspective it is better to learn from experience than to plan for every 
possibility.  We need to have benchmarks first, and after that we can keep adding 
benchmarks as we learn about failure modes that we didn't know about or understand.  
Holding onto the old version should be a branching issue.  The SCM keeps a record 
of the older version, and it is always possible to go back and get it.  If you want 
it to remain visible in the tree, then you need to make it into a "module" or put 
an "interface" in front of it to manage the visibility of multiple implementations, 
as we've discussed.

> If two proposed implementations differ in functional correctness or source code
> readability, the one that is better now is going to stay better unless someone
> is working on the other one.
>
> On the other hand, which of two implementations has the better performance can
> change any time we move to a new JVM version or to new versions of associated
> libraries.
>
> One of the consequences of moving to JDK 1.5 is that I expect more and more
> performance-critical parallel code to be written in terms of the java.util packages.
> That means code that is rejected for integration because it is slower than
> another version may become better than the integrated version without being
> under active development.
>
> I'm looking for a shelf I can put such things on where they will not get lost. I
> would like to retain the ability to do performance run-offs without having to
> maintain different versions of all code that depends on the choice.

I don't think that it's necessary to be super conservative about this.  I know 
that you've not been around this code long, and I and others have varied 
experience with different parts of it.  When I first looked at the FastList 
implementation and read the notes, I was immediately ready to replace it with at 
least a "fully synchronized implementation".  I just felt it was far more likely 
to be completely wrong with its JMM "tricks" (as you've found, it has issues) 
than that the performance gain was worth the risk.
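
By a "fully synchronized implementation" I mean roughly the following.  This is 
only a sketch, not FastList's real interface: one intrinsic lock, no volatile/JMM 
tricks, correctness first at the cost of some contention:

    import java.util.ArrayList;
    import java.util.List;

    final class SynchronizedFastList<E> {
        private final List<E> backing = new ArrayList<E>();

        public synchronized void add(E e) { backing.add(e); }
        public synchronized boolean remove(E e) { return backing.remove(e); }

        // Callers iterate over a snapshot, so no lock is held while they work.
        public synchronized List<E> snapshot() { return new ArrayList<E>(backing); }
    }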

> Ideally, it will be part of the River svn structure so that another
> evidence-driven performance person can pick it up even if I'm no longer active
> on the project, and also so that people with access to different environments
> can repeat experiments.

Keeping everything around forever is a difficult way to deal with the moving 
target of "performance optimization".  I really do think that the benchmarks are 
what we should worry about, and when we can't meet a benchmark at a JDK upgrade 
point, we can then look for opportunities for optimization instead of spending 
time on a problem that doesn't exist yet.
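
As a rough, self-contained illustration of the kind of check I mean (the budget 
number, the queue, and the class name are all invented, not anything in River):

    import java.util.concurrent.ConcurrentLinkedQueue;

    // Run a fixed workload, compare nanoseconds per operation against a recorded
    // budget, and only start optimizing when the budget is missed, say after a
    // JDK upgrade.
    public final class BudgetCheck {
        private static final long BUDGET_NANOS_PER_OP = 500; // hypothetical target

        public static void main(String[] args) {
            final int ops = 1000000;
            runOnce(ops); // warm-up pass so the JIT compiles the hot path first
            long start = System.nanoTime();
            long result = runOnce(ops);
            long perOp = (System.nanoTime() - start) / ops;
            System.out.println("result=" + result + "  nanos/op=" + perOp
                    + (perOp > BUDGET_NANOS_PER_OP ? "  ** over budget **" : ""));
        }

        private static long runOnce(int ops) {
            ConcurrentLinkedQueue<Integer> q = new ConcurrentLinkedQueue<Integer>();
            long sum = 0;
            for (int i = 0; i < ops; i++) {
                q.add(Integer.valueOf(i));
                sum += q.poll().intValue();
            }
            return sum;
        }
    }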

Gregg

