harmony-dev mailing list archives

From "Pavel Ozhdikhin" <pavel.ozhdik...@gmail.com>
Subject Re: [result] Re: [vote] HARMONY-1363 - DRLVM fixes and additions
Date Thu, 14 Sep 2006 08:41:10 GMT
Hello Rana,

When I think of an optimization which gives a 1% improvement on some simple
workload, or a 3% improvement only on EM64T platforms, I doubt it can be
easily detected with a general-purpose test suite. IMO, performance
regression testing needs a specialized framework and a stable environment
which guarantees that no user application can spoil the results.
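To illustrate what I mean by a stable measurement discipline, here is a minimal sketch (not DRLVM code; the class and method names are just for illustration) of the kind of harness such a framework would need: warm-up runs so the JIT compiles the hot path before timing starts, then a median over several samples to damp interference from other processes.

```java
import java.util.Arrays;

public class PerfHarness {
    // Times one run of the workload in nanoseconds.
    static long timeOnce(Runnable workload) {
        long start = System.nanoTime();
        workload.run();
        return System.nanoTime() - start;
    }

    // Returns the median of `runs` timed executions, taken after
    // `warmups` untimed executions have given the JIT a chance to
    // compile and optimize the hot path.
    static long medianNanos(Runnable workload, int warmups, int runs) {
        for (int i = 0; i < warmups; i++) workload.run();
        long[] samples = new long[runs];
        for (int i = 0; i < runs; i++) samples[i] = timeOnce(workload);
        Arrays.sort(samples);
        return samples[runs / 2];
    }

    public static void main(String[] args) {
        int[] data = new int[1_000_000];
        Runnable sum = () -> { long s = 0; for (int v : data) s += v; };
        System.out.println("median ns: " + medianNanos(sum, 5, 11));
    }
}
```

Even with warm-up and medians, the absolute numbers still depend on the machine, which is why I think this belongs in a controlled environment rather than in the general regression suite.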

The right solution might also be a JIT testing framework which understands
the JIT IRs and checks whether certain code patterns have been optimized as
expected. That way we could guarantee that the necessary optimizations are
performed independently of the user environment.
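For contrast, the timing-based comparative test Rana describes below could be sketched roughly like this (again illustrative, not DRLVM code): one loop where a bounds-check-eliminating JIT can prove every index is in range, and a near-identical loop where the index is opaque to the compiler, so the check must stay.

```java
public class BoundsCheckRegression {
    static final int ITERS = 200;
    static final int[] data = new int[4096];
    static final int[] idx = new int[4096];
    static {
        for (int i = 0; i < data.length; i++) { data[i] = i; idx[i] = i; }
    }

    // Ascending access over the whole array: a bounds-check-eliminating
    // JIT can prove i is always in range and drop the per-access check.
    static long direct() {
        long sum = 0;
        for (int r = 0; r < ITERS; r++)
            for (int i = 0; i < data.length; i++) sum += data[i];
        return sum;
    }

    // Same traversal, but the index is loaded from a second array, so the
    // JIT cannot easily prove it is in range and must keep the check.
    static long indirect() {
        long sum = 0;
        for (int r = 0; r < ITERS; r++)
            for (int i = 0; i < idx.length; i++) sum += data[idx[i]];
        return sum;
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime(); long a = direct();   long t1 = System.nanoTime();
        long b = indirect();                              long t2 = System.nanoTime();
        if (a != b) throw new AssertionError("workloads diverged");
        // A real regression test would fail if this ratio fell below some
        // platform-tuned threshold X; here it is only printed, because raw
        // ratios are exactly what a noisy user environment can spoil.
        System.out.println("ratio: " + (double) (t2 - t1) / (t1 - t0));
    }
}
```

The fragility of the threshold X in such a test is precisely why an IR-level pattern check looks more attractive to me.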

Thanks,
Pavel



On 9/14/06, Rana Dasgupta <rdasgupt@gmail.com> wrote:
>
> Hi Egor,
>   An optimization is functionality that can regress like anything else.
> The functionality is the perf gain, which is the point of the optimization.
> How would any committer confirm that the submitted code actually performs
> the optimization, other than taking the developer's word for it? Also, if
> tomorrow I add some code that undoes your optimization, or just breaks
> your code, how could we detect this? I think we need an associated
> regression test (at least for significant optimizations) that shows some
> guaranteed/minimal perf gain.
> One way to write the test would be to loop N times on a scenario that
> kicks in the optimization, say array bounds check elimination, and then
> loop N times on a very similar scenario in which the bounds check does
> not get eliminated. The test should then pass only if the difference in
> timing is at least X on any platform.
> I have been forced to do this several times :-) So I couldn't resist
> spreading the pain.
>
> Thanks,
> Rana
>
>
>
> > On 14 Sep 2006 12:10:19 +0700, Egor Pasko <egor.pasko@gmail.com> wrote:
> > >
> > >
> > > Weldon, I am afraid this is a performance issue, and the test would
> > > show nothing more than a serious performance boost after the fix. I'll
> > > find someone with a test like this :) and ask them to attach it to
> > > JIRA. But... do we need performance tests in the regression suite?
> > >
> > > Apart from this issue, I see that the JIT infrastructure is not as
> > > test-oriented as one would expect. JIT tests sometimes need to be more
> > > sophisticated than those in vm/tests and, I guess, we need a separate
> > > place for them in the JIT tree.
> > >
> > > Many JIT tests are sensitive to various JIT options and cannot be
> > > reproduced in the default mode. For example, to catch a bug in OPT
> > > with a small test you will have to provide the "-Xem opt" option.
> > > Thus, in a regression test we will need:
> > > (a) extra options to the VM,
> > > (b) sources (often in jasmin or C++, for hand-crafted IRs),
> > > (c) and even *.emconfig files to set custom sequences of optimizations.
> > >
> > > (anything else?)
> > > I am afraid we will have to hack a lot on top of JUnit to get all this.
> > >
> > > Let's decide now whether we need a framework like this. We can make a
> > > first version quite quickly and improve it further on an as-needed
> > > basis. The design is not quite clear yet, though I expect the
> > > discussion to converge quickly.
> > >
> > >
> > > --
> > > Egor Pasko, Intel Managed Runtime Division
> >
> >
>
>
> ---------------------------------------------------------------------
> Terms of use : http://incubator.apache.org/harmony/mailing.html
> To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
> For additional commands, e-mail: harmony-dev-help@incubator.apache.org
>
>
