harmony-dev mailing list archives

From "Geir Magnusson Jr." <g...@pobox.com>
Subject Re: [general] change the subject line...
Date Fri, 15 Sep 2006 11:50:59 GMT
wimp...

Tim Ellison wrote:
> at least, or better yet, move to a new thread.  This conversation has
> moved way beyond discussing the results of a contribution vote.
> 
> For the poor people who have hundreds of mails to read ;-)
> 
> Tim
> 
> Pavel Ozhdikhin wrote:
>> Rana,
>>
>> To cover most performance regressions, ideally we would have three
>> types of tests:
>>
>>   1. High-level benchmarks (like SPEC or DaCapo) - cover significant
>>   performance issues, or issues that are not tied to a particular
>>   optimization
>>   2. Performance regression tests or micro-benchmarks - bytecode-based
>>   tests, each covering one optimization or a small group of them
>>   3. Low-level tests (like VM or JIT unit tests) - tests for
>>   particular code transformations.
>>
>> To my mind, the second type of test is the most sensitive to the
>> environment, and in many cases it's difficult to create such a test in
>> a reasonable amount of time. I admit this type of test might be a
>> short-term solution to check whether a fix has been integrated
>> properly, but in the long term we also need types 1 and 3 from the
>> list.
>>
>> Thanks,
>> Pavel
>>
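
For illustration, here is a minimal sketch of Pavel's second type of test: a
micro-benchmark aimed at a single optimization, compared against a recorded
baseline with a tolerance for noise. The class name, workload, baseline and
tolerance below are hypothetical, not taken from the Harmony test suites.

    // Hypothetical micro-benchmark targeting one optimization (hoisting
    // the loop-invariant array-length read).  Baseline and tolerance are
    // illustrative values measured once on a reference machine.
    public class LoopInvariantRegressionTest {

        private static final long BASELINE_MS = 120;   // reference machine
        private static final double TOLERANCE = 1.3;   // allow 30% noise

        static long workload() {
            int[] data = new int[1000];
            for (int i = 0; i < data.length; i++) {
                data[i] = i;
            }
            long sum = 0;
            for (int outer = 0; outer < 200000; outer++) {
                // data.length is loop-invariant; the JIT should hoist it
                for (int i = 0; i < data.length; i++) {
                    sum += data[i];
                }
            }
            return sum;
        }

        public static void main(String[] args) {
            workload();                  // warm-up so the JIT compiles it
            long start = System.currentTimeMillis();
            long result = workload();
            long elapsed = System.currentTimeMillis() - start;
            System.out.println("result=" + result + ", elapsed=" + elapsed + "ms");
            if (elapsed > BASELINE_MS * TOLERANCE) {
                System.err.println("FAIL: possible performance regression");
                System.exit(1);
            }
            System.out.println("PASS");
        }
    }

Such a test is cheap to run after every integration, but, as noted above, it
is fragile: any other load on the machine can push it past the tolerance.
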
>> On 9/14/06, Rana Dasgupta <rdasgupt@gmail.com> wrote:
>>
>>> Hi Pavel,
>>>   Platform-specific optimizations can be accommodated in the scheme
>>> described by doing a cpuid check in the test and automatically passing
>>> or disabling it on all other platforms. That shouldn't be too hard.
>>>   I understand that some JIT optimizations are deeper and more
>>> abstract, but ultimately the value of the optimization cannot just be
>>> the morphing of an IR: the gain cannot be invisible to the user, nor
>>> the regression undetectable. If it needs to be part of a sequence to
>>> be effective, the scenario in the test needs to be set up accordingly.
>>> It is a little uncomfortable if a framework does some magic and then
>>> comes back and says "everything is OK".
>>>   Sorry to sound difficult.
>>>
>>> Thanks,
>>> Rana
>>>
>>>
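
Below is a rough sketch of the platform guard Rana describes: a test for an
EM64T-only optimization that reports itself as passed everywhere else.
Checking os.arch is only a stand-in for a real cpuid query, which would need
native code; the class name and the reporting convention are assumptions.

    // Illustrative guard for an EM64T-specific test: on any other
    // architecture the test passes trivially instead of producing noise.
    public class PlatformGuardedTest {
        public static void main(String[] args) {
            String arch = System.getProperty("os.arch", "");
            boolean em64t = arch.equals("amd64") || arch.equals("x86_64");
            if (!em64t) {
                System.out.println("SKIP (reported as PASS): EM64T-only test,"
                        + " os.arch=" + arch);
                return;
            }
            // ... run the EM64T-specific micro-benchmark here ...
        }
    }
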
>>>> On 9/14/06, Pavel Ozhdikhin <pavel.ozhdikhin@gmail.com> wrote:
>>>>> Hello Rana,
>>>>>
>>>>> When I think of an optimization which gives a 1% improvement on
>>>>> some simple workload, or a 3% improvement only on EM64T platforms,
>>>>> I doubt it can be easily detected with a general-purpose test
>>>>> suite. IMO performance regression testing should have a specialized
>>>>> framework and a stable environment which guarantees that no user
>>>>> application can spoil the results.
>>>>> The right solution might also be a JIT testing framework which
>>>>> understands the JIT IRs and checks whether certain code patterns
>>>>> have been optimized as expected. That way we can guarantee the
>>>>> necessary optimizations are performed independently of the user
>>>>> environment.
>>>>>
>>>>> Thanks,
>>>>> Pavel
>>>>>
> 
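
To make the IR-level idea above concrete, here is a rough sketch of the kind
of check such a framework could perform: run the benchmark under the VM with
an option that dumps the optimized IR for a method, then scan the dump for a
pattern that should have been removed. The flag name, dump format and the
"chk_bounds" opcode are purely hypothetical; no such interface is implied to
exist in Harmony today.

    // Rough sketch of an IR pattern check.  "-XX:JitDumpIR=..." and
    // "chk_bounds" are hypothetical; a real framework would use whatever
    // dump facility the JIT actually provides.
    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class IrPatternCheck {
        public static void main(String[] args) throws Exception {
            ProcessBuilder pb = new ProcessBuilder(
                    "java", "-XX:JitDumpIR=MyBenchmark.hotLoop", "MyBenchmark");
            pb.redirectErrorStream(true);
            Process p = pb.start();

            boolean boundsCheckSeen = false;
            BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()));
            String line;
            while ((line = r.readLine()) != null) {
                // If the optimized IR still contains a bounds check, the
                // optimization under test did not fire.
                if (line.contains("chk_bounds")) {
                    boundsCheckSeen = true;
                }
            }
            p.waitFor();
            System.out.println(boundsCheckSeen
                    ? "FAIL: bounds check still present in optimized IR"
                    : "PASS: bounds check eliminated");
        }
    }

The appeal of this approach is independence from machine load; the drawback,
as Rana points out, is that it proves the IR was transformed, not that the
user sees any gain.
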


