harmony-dev mailing list archives

From "Mikhail Loenko" <mloe...@gmail.com>
Subject Re: [testing] optimization regressions (was: Re: [result] Re: [vote] HARMONY-1363 - DRLVM fixes and additions)
Date Thu, 14 Sep 2006 11:33:10 GMT
In the example I've mentioned before, the difference between optimized and
non-optimized calls was about 1000x, but the test sometimes failed anyway.

Thanks,
Mikhail

On 14 Sep 2006 17:59:44 +0700, Egor Pasko <egor.pasko@gmail.com> wrote:
> On the 0x1E4 day of Apache Harmony Pavel Ozhdikhin wrote:
> > When I think of an optimization which gives a 1% improvement on some simple
> > workload, or a 3% improvement on EM64T platforms only, I doubt it can be
> > easily detected with a general-purpose test suite. IMO, performance
> > regression testing should have a specialized framework and a stable
> > environment which guarantees that no user application can spoil the results.
> >
> > The right solution might also be a JIT testing framework which understands
> > the JIT IRs and checks whether certain code patterns have been optimized
> > as expected. That way we can guarantee the necessary optimizations are done
> > independently of the user environment.
>
> Pavel, Rana,
>
> Sometimes a performance issue reproduces well with a microbenchmark on
> all platforms. Basically, you can compare execution times with
> some_optpass=on and some_optpass=off. If the difference is less than,
> say, 20%, the test fails. In such cases it is easier to write a test
> like this than to stick to IR-level testing.
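>
> A rough sketch of such a test (the -Xem:some_optpass=... strings below
> are placeholders rather than actual DRLVM flags, and Workload is a
> hypothetical microbenchmark class):
>
> import java.util.ArrayList;
> import java.util.List;
>
> public class OptPassTimingTest {
>
>     // Runs the workload in a child JVM with the given option and
>     // returns the wall-clock time in milliseconds (output is ignored).
>     static long timeRun(String vmOption) throws Exception {
>         List<String> cmd = new ArrayList<String>();
>         cmd.add(System.getProperty("java.home") + "/bin/java");
>         cmd.add(vmOption);
>         cmd.add("Workload");
>         long start = System.currentTimeMillis();
>         Process p = new ProcessBuilder(cmd).start();
>         p.waitFor();
>         return System.currentTimeMillis() - start;
>     }
>
>     public static void main(String[] args) throws Exception {
>         long on  = timeRun("-Xem:some_optpass=on");   // placeholder flag
>         long off = timeRun("-Xem:some_optpass=off");  // placeholder flag
>         // Fail unless the optimized run is at least 20% faster.
>         if (on * 1.2 > off)
>             throw new AssertionError("optimization regression: on="
>                 + on + "ms, off=" + off + "ms");
>     }
> }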
>
> Sometimes, a performance issue is more sophisticated and we need an
> IR-oriented test.
>
> I would vote for having *both* kinds of tests in the JIT regression testbase.
>
> P.S.: Are we out of ideas, and is it time to implement something?
>
> > On 9/14/06, Mikhail Loenko <mloenko@gmail.com> wrote:
> > >
> > > Hi Rana
> > >
> > > 2006/9/14, Rana Dasgupta <rdasgupt@gmail.com>:
> > > <SNIP>
> > > > One way to write the test would be to loop N times on a scenario that
> > > > kicks in the optimization, say, array bounds check elimination, and
> > > > then loop N times on a very similar scenario in which the bounds check
> > > > does not get eliminated. The test should then pass only if the
> > > > difference in timing is at least X on any platform.
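> > > >
> > > > For illustration, a minimal sketch of such a pair of loops (the class
> > > > and method names are hypothetical, and the volatile bound is just one
> > > > way to keep the second loop's checks alive):
> > > >
> > > > public class AbceBenchmark {
> > > >     static volatile int volatileBound;
> > > >
> > > >     // The bound is a.length, so the JIT can eliminate bounds checks.
> > > >     static long sumEliminable(int[] a) {
> > > >         long s = 0;
> > > >         for (int i = 0; i < a.length; i++)
> > > >             s += a[i];
> > > >         return s;
> > > >     }
> > > >
> > > >     // The bound is a volatile field the JIT cannot prove to be within
> > > >     // the array's limits, so the bounds checks have to stay.
> > > >     static long sumChecked(int[] a) {
> > > >         long s = 0;
> > > >         for (int i = 0; i < volatileBound; i++)
> > > >             s += a[i];
> > > >         return s;
> > > >     }
> > > >
> > > >     public static void main(String[] args) {
> > > >         int[] a = new int[1 << 16];
> > > >         volatileBound = a.length;
> > > >         long total = 0;
> > > >         // Warm up so both methods are JIT-compiled before timing.
> > > >         for (int i = 0; i < 1000; i++)
> > > >             total += sumEliminable(a) + sumChecked(a);
> > > >         long t0 = System.nanoTime();
> > > >         for (int i = 0; i < 10000; i++) total += sumEliminable(a);
> > > >         long t1 = System.nanoTime();
> > > >         for (int i = 0; i < 10000; i++) total += sumChecked(a);
> > > >         long t2 = System.nanoTime();
> > > >         // Printing total keeps the loops from being optimized away.
> > > >         System.out.println("eliminable=" + (t1 - t0) + "ns, checked="
> > > >             + (t2 - t1) + "ns, total=" + total);
> > > >     }
> > > > }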
> > >
> > > I tried to create a similar test when I was testing that resolved IP
> > > addresses are cached. I eventually figured out that such a test is not
> > > the best pre-commit test, as it may accidentally fail if I run other
> > > apps on the same machine where I run the tests.
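> > >
> > > The idea, roughly sketched (the host name and the 10x threshold are
> > > illustrative, not the actual test):
> > >
> > > import java.net.InetAddress;
> > >
> > > public class DnsCacheTimingTest {
> > >     public static void main(String[] args) throws Exception {
> > >         String host = "example.org";
> > >
> > >         long t0 = System.nanoTime();
> > >         InetAddress.getByName(host);   // real resolution
> > >         long first = System.nanoTime() - t0;
> > >
> > >         t0 = System.nanoTime();
> > >         InetAddress.getByName(host);   // should be served from the cache
> > >         long second = System.nanoTime() - t0;
> > >
> > >         // Fails now and then on a loaded machine -- which is exactly
> > >         // what makes it a poor pre-commit test.
> > >         if (second * 10 > first)
> > >             throw new AssertionError("cached lookup not 10x faster: "
> > >                 + first + "ns vs " + second + "ns");
> > >     }
> > > }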
> > >
> > > And as you know, an unstable failure is not the most pleasant thing to
> > > deal with :)
> > >
> > > Thanks,
> > > Mikhail
> > >
> > >
> > > >  I have been forced to do this several times :-) So I couldn't resist
> > > > spreading the pain.
> > > >
> > > > Thanks,
> > > > Rana
> > > >
> > > >
> > > >
> > > > > On 14 Sep 2006 12:10:19 +0700, Egor Pasko <egor.pasko@gmail.com> wrote:
> > > > > >
> > > > > >
> > > > > > Weldon, I am afraid this is a performance issue, and the test
> > > > > > would show nothing more than a serious performance boost after the
> > > > > > fix. I'll find someone with a test like this :) and ask them to
> > > > > > attach it to JIRA. But... do we need performance tests in the
> > > > > > regression suite?
> > > > > >
> > > > > > Apart from this issue, I see that the JIT infrastructure is not as
> > > > > > test-oriented as one would expect. JIT tests sometimes need to be
> > > > > > more sophisticated than those in vm/tests and, I guess, we need a
> > > > > > separate place for them in the JIT tree.
> > > > > >
> > > > > > Many JIT tests are sensitive to various JIT options and cannot be
> > > > > > reproduced in the default mode. For example, to catch a bug in OPT
> > > > > > with a small test you will have to provide the "-Xem opt" options.
> > > > > > Thus, in a regression test we will need:
> > > > > > (a) extra options to the VM,
> > > > > > (b) sources (often in jasmin or C++, for hand-crafted IRs),
> > > > > > (c) and even *.emconfig files to set custom sequences of
> > > > > >     optimizations.
> > > > > >
> > > > > > (anything else?)
> > > > > > I am afraid we will have to hack a lot on top of JUnit to get all
> > > > > > of these.
> > > > > >
> > > > > > Let's decide now whether we need a framework like this. We can
> > > > > > make a first version quite quickly and improve it further on an
> > > > > > as-needed basis. The design is not quite clear yet, though I
> > > > > > expect the discussion to converge quickly.
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Egor Pasko, Intel Managed Runtime Division
> > > > >
> > > > >
> > > >
> > > >
> > >
>
> --
> Egor Pasko, Intel Managed Runtime Division
>
>

---------------------------------------------------------------------
Terms of use : http://incubator.apache.org/harmony/mailing.html
To unsubscribe, e-mail: harmony-dev-unsubscribe@incubator.apache.org
For additional commands, e-mail: harmony-dev-help@incubator.apache.org

