harmony-dev mailing list archives

From "Elford, Chris L" <chris.l.elf...@intel.com>
Subject RE: [performance] a few early benchmarks
Date Mon, 27 Nov 2006 21:53:41 GMT
Stefano Mazzocchi writes:
>> ...
>> Then you have to realize that not everybody has the $$$ or the
>> intention to spend money to help us optimize Harmony.

Your point that there is a $$$ barrier to entry for utilizing SPEC
workloads is a good one.  I wonder if SPEC would consider some sort of
grant to Apache to use their workloads with Harmony...  The SPEC site
(http://www.spec.org/order.html) seems to read as if a single Apache
license might cover all Harmony developers...

>> ...
>> Our main and only objective is to please our users, and to reduce
>> the barrier for them to become developers as much as we possibly can.

I agree wholeheartedly with the first part of this sentence regarding
pleasing our users.  For the next 9 months or so, I'd also agree with
the second part: lowering the barrier to developer entry is strongly
desirable.  Your point is well taken that, for now, we need to do
whatever we can to make sure that key workloads (including performance
workloads) are available to our developers.  In the longer term, I
would hope that we have more users than developers, and that developers
will pay attention to performance [and functional] bugs filed against
Harmony on behalf of applications for which the source and/or all
executable bits are unavailable.

>> ...
>> But benchmarks have two objectives:
>> show the current performance status *and* allow potential developers
>> to test if their changes have made a positive impact before
>> submitting them.

A benchmark is created to stress some combination of technologies, and
benchmarks tend to be distilled from some set of real-world
requirements.  As someone who worked for years with industry consortia
on standardizing benchmarks, I would argue that benchmarks have more
than the two objectives you mention.  To name a few: attracting new
users, showcasing your "product", providing a platform for
level-playing-field comparisons with alternate implementations,
allowing demonstration of technology readiness, driving improvements in
key portions of the technology stack, and so on.

To showcase Harmony as a performance-competitive JRE with technology
readiness for client and enterprise-class deployments, I believe that
we will need to agree to underscore some combination of academic
benchmarks (such as the DaCapo memory management benchmarks), domain
benchmarks (such as SciMark, XMLbench, etc.), and more general
standardized industry benchmarks (such as SPEC and/or TPC).  Of course
we all have our own biases, but I personally don't think that Harmony
will be successful without some due diligence applied to the latter,
and I believe that Harmony needs to find some way of working with some
of these benchmarks (e.g., SPECjbbXXXX, SPECjvmXXXX,
SPECjAppServerXXXX, TPC-App).
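
For the freely available suites, at least, getting a shared baseline is
cheap.  As a rough sketch (the jar name below is from the 2006-10
DaCapo release; substitute whatever version we standardize on), any
contributor can run, e.g.,

    java -jar dacapo-2006-10.jar fop

once against a reference JRE and once against a Harmony build with
their patch applied, and post both reported times along with the
submission.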

I assume that we will have contributors interested in different
combinations of these benchmarks.  Harmony needs to create some guiding
principles for how the design/implementation will accommodate these
varied performance interests while moving Harmony toward an ever better
performing platform for our users.  Optimizing for performance tends to
be at odds with optimizing for portability and/or maintainability, in
that it often involves fast-path exceptions to generalized code.
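
To illustrate the kind of trade-off I mean, here is a hypothetical
sketch (not Harmony code; the class and method names are invented): a
fast path special-cases the common input and falls back to the
generalized code when the assumption does not hold:

    // Hypothetical illustration only -- not Harmony code.
    final class Ascii {
        /** Upper-cases a string, with a cheap path for ASCII input. */
        static String toUpper(String s) {
            for (int i = 0; i < s.length(); i++) {
                if (s.charAt(i) > 127) {
                    // General path: full Unicode case mapping.
                    return s.toUpperCase(java.util.Locale.ENGLISH);
                }
            }
            // Fast path: ASCII-only input skips the Unicode machinery.
            char[] out = s.toCharArray();
            for (int i = 0; i < out.length; i++) {
                char c = out[i];
                if (c >= 'a' && c <= 'z') {
                    out[i] = (char) (c - ('a' - 'A'));
                }
            }
            return new String(out);
        }
    }

The fast path duplicates logic that the general path already handles
correctly; that duplication is precisely the maintainability cost that
has to be weighed against the measured speedup.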

Regards,

Chris Elford
Intel SSG/Enterprise Solutions Software Division

-----Original Message-----
From: Stefano Mazzocchi [mailto:stefano@apache.org] 
Sent: Monday, November 27, 2006 9:28 AM
To: dev@harmony.apache.org
Subject: Re: [performance] a few early benchmarks

Mikhail Loenko wrote:
> 2006/11/27, Stefano Mazzocchi <stefano@apache.org>:
>> Mikhail Loenko wrote:
>>
>> > Not sure that when we elect an official benchmark we should take
>> > into account whether it's open or not.
>>
>> You're kidding, right?
>
> no. not at all

Then you have to realize that not everybody has the $$$ or the
intention to spend money to help us optimize Harmony. No, worse: such
carelessness about open participation is poisonous to the creation of a
more diverse and distributed development community.

This is not a closed-development project anymore; the rules have
changed: we need to think in terms of lowering participation obstacles,
not in terms of pleasing our upper management or our potential
customers.

Our main and only objective is to please our users, and to reduce the
barrier for them to become developers as much as we possibly can. And
that sometimes includes making compromises with our own personal (or
job-related) interests.

If you (or anybody else around you) care about those, you are more than
welcome to continue on that path and to submit code changes, results,
and ideas... but expecting an entire community to base its performance
results on something potential developers would have to pay for is
equivalent to destroying the ability of non-corporate-sponsored
individuals to help in that space.

And, I'm sorry, but such discrimination won't make me (and a lot of
people like me) happy.

Let me be crystal clear: you are and will continue to be *more than
welcome* to benchmark with anything you (or anybody around you) care
about and work with those results. But benchmarks have two objectives:
show the current performance status *and* allow potential developers to
test if their changes have made a positive impact before submitting
them.

Now that we are an open development project, we must learn,
collectively, to think in terms of 'what can I do to acquire more
development participation', not only in terms of 'what can I do to
acquire more users'.

DaCapo fits both goals very well. SPEC fits the first, partially.

And, finally, I don't care if SPEC is the industry standard: if we were
driven by following what everybody else was doing, there would be no
Harmony in the first place.

-- 
Stefano.
