harmony-dev mailing list archives

From "Aleksey Shipilev" <aleksey.shipi...@gmail.com>
Subject Re: [drlvm][jitrino][test] Large contribution of reliability test cases for DRLVM+Dacapo
Date Tue, 02 Dec 2008 08:48:14 GMT
Hi, Egor!

I will disclose the methodology a little later. If you're interested in
which options were swizzled (I selected them by hand, so I may have
missed some), please look into one of those emconfs; there are plenty
of options at the end of the file.
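
To give an idea of the shape of the search loop, here is a minimal
sketch (illustrative only: the option names, file names, DaCapo jar and
the exact -Xem invocation are placeholders and assumptions, not the
real thesis harness):

  # Illustrative GA sketch; option names, paths and the exact VM
  # invocation are assumptions, not the real DRLVM/Jitrino harness.
  import random, re, subprocess

  # Hand-picked options to swizzle, each with its allowed values (made up).
  OPTIONS = {
      "opt::inline::budget": ["50", "100", "200"],
      "opt::unroll::enable": ["true", "false"],
  }

  def write_emconf(path, cfg):
      # Dump key=value pairs; a real emconf also carries the pass chains.
      with open(path, "w") as f:
          for k, v in cfg.items():
              f.write("%s=%s\n" % (k, v))

  def evaluate(cfg):
      write_emconf("candidate.emconf", cfg)
      # Exact invocation is an assumption (DRLVM selects the EM config
      # via -Xem); the jar name is a placeholder.
      proc = subprocess.run(
          ["java", "-Xem:candidate.emconf", "-jar", "dacapo.jar", "fop"],
          capture_output=True, text=True)
      if proc.returncode != 0:
          # A crash is not just bad fitness: keep the emconf for a report.
          with open("failures.log", "a") as log:
              log.write("%s\n%s\n" % (cfg, proc.stderr))
          return float("-inf")
      m = re.search(r"PASSED in (\d+) msec", proc.stdout)
      return -int(m.group(1)) if m else float("-inf")

  def mutate(cfg):
      child = dict(cfg)
      key = random.choice(list(OPTIONS))
      child[key] = random.choice(OPTIONS[key])
      return child

  population = [{k: random.choice(v) for k, v in OPTIONS.items()}
                for _ in range(20)]
  for generation in range(50):
      ranked = sorted(population, key=evaluate, reverse=True)
      parents = ranked[:10]
      population = parents + [mutate(random.choice(parents))
                              for _ in range(10)]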

Nevertheless, there are compiler failures during the swizzling. Even if
the configuration produced is bad, the compiler should tell me so, not
crash outright :) If I had had those clues during the search, I would
have constrained the search within those boundaries, but the compiler
just crashes. So even where some failures are legitimate, they need to
be documented, with a meaningful message thrown instead of a crash.
That is why I think every emconf is worth reviewing.
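
As an example of the kind of constraint I mean: the ssa/dessa ordering
you mention below could be written down as an explicit rule that both
the search and the compiler check up front. A toy sketch of the check I
have in mind (the pass names are made up, and this is not how Jitrino
is actually structured):

  # Toy validity check, not Jitrino code: encode a "by design" ordering
  # rule so a bad pass chain is rejected with a message, not a crash.
  REQUIRES_SSA = {"gvn", "licm"}     # hypothetical pass names
  REQUIRES_NO_SSA = {"codegen"}      # hypothetical

  def check_chain(passes):
      in_ssa = False
      for p in passes:
          if p == "ssa":
              in_ssa = True
          elif p == "dessa":
              in_ssa = False
          elif p in REQUIRES_SSA and not in_ssa:
              return "pass '%s' needs SSA form: put 'ssa' before it" % p
          elif p in REQUIRES_NO_SSA and in_ssa:
              return "pass '%s' needs non-SSA form: put 'dessa' before it" % p
      return None  # chain looks acceptable

  # The GA would call check_chain() before running a candidate, and the
  # compiler could emit the same message instead of crashing.
  print(check_chain(["ssa", "gvn", "dessa", "codegen"]))  # None
  print(check_chain(["gvn", "ssa", "codegen"]))           # diagnostic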

Thanks,
Aleksey.

On Tue, Dec 2, 2008 at 11:21 AM, Egor Pasko <egor.pasko@gmail.com> wrote:
> On the 0x506 day of Apache Harmony Aleksey Shipilev wrote:
>> Hi,
>>
>> I have already done the same thing for JikesRVM [1], and now the time
>> for Harmony has come.
>>
>> As part of my MSc thesis I used a GA to swizzle the JIT configuration
>> for DRLVM in search of an optimal one for running the DaCapo/SciMark2
>> benchmarks. While the performance data is being re-verified (there is
>> a preliminary +10% on some sub-benchmarks, btw), I parsed the failure
>> logs, and that gives me 5,700+ emconfs [2] on which DRLVM/Jitrino is
>> failing.
>>
>> What makes those reports really interesting is that, due to the nature
>> of the search, most of the configurations tested lie near local maxima
>> of performance. That makes the tests more valuable, since they exercise
>> possible near-optimal configurations.
>>
>> If anyone is interested in those and wishes to hear more about the
>> reports, please don't hesitate to ask :)
>> I will eventually elaborate on some of these crashes, but not in the
>> near future.
>
> Aleksey, great work! (at least on some sub-benchmarks, btw:)
>
> Generally I am a bit skeptical about the effectiveness of analyzing
> these failures. It would be interesting to read about your
> methodology, i.e. did you put some constraints by hand to avoid
> failures that are expected by design? An example: if you happen to not
> put ssa/dessa in the right place (ssa before optimizations that
> require SSA form, dessa after optimization passes that require no
> SSA), you get a JIT failure.
>
> The sad story is that there are many such "by design" peculiarities,
> many undocumented, many hard to discover.
>
> --
> Egor Pasko
>
>
