harmony-dev mailing list archives

From "Mikhail Fursov" <mike.fur...@gmail.com>
Subject [drlvm][jit] Internal testing framework for Jitrino.OPT compiler
Date Thu, 21 Sep 2006 14:34:01 GMT
All,

This is a proposal to start the discussion and implementation of an
internal testing framework for the Jitrino.OPT compiler.
Jitrino contains many optimizations with a variety of options, but only a
minor part of them is exercised and tested in the default configuration.
As a result, when adding new features it is very easy to make a change that
conflicts with one of the unused modes.
Another example is optimization interdependencies. While fixing a bug for one
IR (the compiler's intermediate representation of the user's code) we can
inadvertently disable the optimization for another IR. Once most of such
situations are encoded as separate reliability tests, we can run the checks
and find all collisions.

My vision is that the most precise IR optimization tests should inspect the
IR itself to prove that an optimization really does what it is expected to do.
Tests for individual optimizations and functions inside the Jitrino compiler
will also require thorough documentation and understanding of the code.
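To make the idea concrete, here is a minimal sketch of such an IR-inspecting
test. All names here (Inst, deadCodeElim, checkNoDeadInsts) are hypothetical
illustrations, not the real Jitrino IR API: a toy dead-code-elimination pass
runs over a flat instruction list, and the test then checks the resulting IR
to prove the pass really removed every dead instruction.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical, greatly simplified IR: a flat list of instructions,
// each flagged with whether its result is ever used.
struct Inst {
    std::string op;
    bool hasUses;  // false => result is never read, i.e. the inst is dead
};
using IR = std::vector<Inst>;

// Stand-in for a real optimization pass: erase instructions whose
// result is unused.
void deadCodeElim(IR& ir) {
    ir.erase(std::remove_if(ir.begin(), ir.end(),
                            [](const Inst& i) { return !i.hasUses; }),
             ir.end());
}

// The test checks the IR *after* the pass to prove the optimization
// did what it claims: no dead instruction survives.
bool checkNoDeadInsts(const IR& ir) {
    return std::all_of(ir.begin(), ir.end(),
                       [](const Inst& i) { return i.hasUses; });
}
```

A test would build a small IR template containing a known dead instruction,
run the pass, and assert that checkNoDeadInsts() holds and only the live
instructions remain.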

I have put the initial implementation of the testing framework code into JIRA
and hope that the experience of JIT and QA gurus and volunteers will help us
make this subproject successful.

Here is a short description of the testing framework (the way I understand
it). I hope to hear your suggestions here.

1)  Every test (or set of tests) is executed in a separate JVM instance. The
VM instance is run by a special adapter that is integrated into the common
ant+junit framework.
2)  Compilation of a special 'marker' method triggers the compiler to
execute a test. The trigger code is placed in an action called
"test_runner" in the sources.
3)  The test_runner action gets its configuration from command-line
parameters or Java properties.
4)  The test_runner action selects a test (TestCase) from the test
registry (TestRegistry), prepares the environment, and runs it.
5)  The environment of a test is an IR and is called an IRTemplate.
6)  Once a test has run, test_runner reports "PASSED" or "FAILED" plus
details to the output. The output is processed by the JVM runner from step 1.
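Steps 3 through 6 can be sketched roughly as follows. This is only an
illustration of the intended structure, not the JIRA code: the TestCase and
TestRegistry types and the runTest function are assumed names, and the real
framework would take the test name from a command-line option or Java
property rather than a plain string argument.

```cpp
#include <functional>
#include <map>
#include <string>

// A named test with a body returning true on success.
struct TestCase {
    std::string name;
    std::function<bool()> run;  // true => PASSED
};

// Global registry the test_runner action selects tests from (step 4).
class TestRegistry {
public:
    static TestRegistry& instance() {
        static TestRegistry r;
        return r;
    }
    void add(const TestCase& tc) { tests[tc.name] = tc; }
    const TestCase* find(const std::string& name) const {
        auto it = tests.find(name);
        return it == tests.end() ? nullptr : &it->second;
    }
private:
    std::map<std::string, TestCase> tests;
};

// Body of the test_runner action: select the test named in the
// configuration (step 3), run it, and produce the "PASSED/FAILED"
// report that the JVM-side adapter parses (steps 1 and 6).
std::string runTest(const std::string& name) {
    const TestCase* tc = TestRegistry::instance().find(name);
    if (!tc) return "FAILED: unknown test " + name;
    return tc->run() ? "PASSED" : "FAILED";
}
```

In the real framework the runner would also prepare the IRTemplate
environment (step 5) before invoking the test body.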

Any comments? Volunteers to help me with this task? There are a lot of
algorithms in the JIT that have never been tested this way, so the task looks
very interesting.


+ See JIRA [ http://issues.apache.org/jira/browse/HARMONY-1531 ] for the
sample code.


-- 
Mikhail Fursov
