harmony-dev mailing list archives

From Robin Garner <Robin.Gar...@anu.edu.au>
Subject Re: [arch] Interpreter vs. JIT for Harmony VM
Date Thu, 22 Sep 2005 09:43:11 GMT
acoliver@apache.org wrote:

> In my experience GC tuning often makes a larger difference than fully
> optimized code generation.  Thus anything that doubles our footprint
> will probably tend to be perceptibly slower in larger systems under
> load (these things don't seem to be so perceptible with
> microbenchmarking).
>
> -andy

Garbage collector performance is a space-time tradeoff, and the curve looks 
something like an exponential decay.  The difference between a 1.25x heap 
(relative to the minimum heap required to run the application) and a 1.5x 
heap can be dramatic, but the difference between a 5x heap and a 6x heap is 
negligible.
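
To make the shape of that curve concrete, here is the textbook approximation 
for a tracing collector (a rough model only, not a measurement of any 
particular VM): collection work per unit of allocation scales roughly with 
live / (heap - live).

// Rough model only: the relative GC overhead for a heap of k times the
// live set is about 1 / (k - 1), so the curve flattens quickly once the
// heap is a few multiples of the live data.
public class GcOverheadSketch {
    static double relativeOverhead(double heapMultiple) {
        return 1.0 / (heapMultiple - 1.0);
    }

    public static void main(String[] args) {
        for (double k : new double[] {1.25, 1.5, 5.0, 6.0}) {
            System.out.printf("heap = %.2fx live -> relative overhead %.2f%n",
                              k, relativeOverhead(k));
        }
        // 1.25x -> 4.00 vs 1.5x -> 2.00: a factor-of-two difference.
        // 5x -> 0.25 vs 6x -> 0.20: barely noticeable.
    }
}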

On the other hand code compiled with an optimizing compiler is (at least 
for the JikesRVM compilers) 7+ times faster than with the baseline compiler.

The cost of compiled code would only double our footprint if the code were 
comparable in size to the data.  Santiago's results below show about 20MB of 
baseline-compiled code, leaving the other ~80MB of the increase unexplained.
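
For what it's worth, here is the back-of-envelope reading of the Base row in 
the report quoted below (I'm assuming BCKB and MCKB are bytecode and 
machine-code kilobytes, which is how I read the JikesRVM compilation report):

// ~2187 KB of bytecode becomes ~22978 KB of baseline machine code -- roughly
// the 10x expansion Santiago describes, but still only ~22 MB, so code alone
// cannot account for the ~100 MB difference between the two heaps.
public class CodeExpansionSketch {
    public static void main(String[] args) {
        double bytecodeKB = 2186.7;     // BCKB, Base row
        double machineCodeKB = 22977.7; // MCKB, Base row
        System.out.printf("expansion factor: %.1fx%n", machineCodeKB / bytecodeKB);
        System.out.printf("baseline code size: %.1f MB%n", machineCodeKB / 1024.0);
    }
}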

> Santiago Gala wrote:
>
>> On Wed, 21-09-2005 at 08:29 -0700, will pugh wrote:
>>
>>> I think having a FastJIT and forgoing the interpreter is a pretty 
>>> elegant solution, however, there are a few things that may come out 
>>> of this:
>>>
>>>  1)  Implementing JVMTI will probably be more difficult than doing 
>>> a straight interpreter
>>>  2)  The FastJIT needs to be Fast!  Otherwise, you run the risk of 
>>> people not wanting to use it for IDEs and Apps because the startup 
>>> time is too slow.
>>
I would have thought that implementing JVMTI for SlowJIT-ted code would 
have been about as difficult as for the FastJIT-ted code?  Or are we to 
assume that tools will only be used at the lowest level of optimization?

>>
>> 3) Memory. A typical fast, non opt JIT will generate 10-15 bytes of
>> machine code *per bytecode*. This means that, say, tomcat plus typical
>> web applications will generate more than 20Megs of jitted code that will
>> be executed just a few times. A fast interpreter+optimizing compiler
>> would achieve similar performance and save most of those 20Megs.
>>
>> I've seen this going on in my efforts to get jetspeed running on top of
>> jikesRVM+classpath (which is leading to a series of bug reports to both
>> projects).
>
I don't think there's any reason why code that never gets executed needs 
to be kept - it should be possible for the VM to revert a baseline 
compiled method to its original uncompiled state, and as long as there's 
no active stack frame executing the method it can be reclaimed.  The 
details are probably a little hairy, of course ...
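
A sketch of what that check might look like, using entirely made-up types 
(these are not JikesRVM's real classes): before throwing away a method's 
baseline-compiled code, walk the thread stacks looking for a frame that is 
still executing it, and only discard the machine code if none is found.

// Sketch only: CompiledCode, StackFrame and VmThread are hypothetical
// stand-ins, not the real JikesRVM classes.
interface CompiledCode {
    void revertOwnerToUncompiled(); // next call falls back to (re)compilation
    void freeMachineCode();         // release the code space
}
interface StackFrame { CompiledCode code(); }
interface VmThread { Iterable<StackFrame> frames(); }

class CodeReclaimSketch {
    // Returns true if the compiled code was safely discarded.
    static boolean tryReclaim(CompiledCode code, Iterable<VmThread> threads) {
        for (VmThread t : threads) {
            for (StackFrame f : t.frames()) {
                if (f.code() == code) {
                    return false; // an active frame still references it
                }
            }
        }
        code.revertOwnerToUncompiled();
        code.freeMachineCode();
        return true;
    }
}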

>>
>> I have it running on my linux-ppc TiBook; the only remaining problem,
>> with ClassLoader.getResource, is proving difficult to solve and stands
>> between me and a full success. :)
>>
>> Tomcat+Jetspeed runs (qualitatively) faster on my TiBook using an
>> optimized JikesRVM+classpath build than using IBM-jdk-1.4.2, but it
>> requires a 200 MB heap, while the IBM jdk runs it in 100 MB.  Also, startup
>> time is about the same or slightly longer, but this is mostly because I
>> don't opt-compile the optimizing compiler itself, to save build time.
>>
>> Example output from a typical run:
>>
>>                 Compilation Subsystem Report
>> Comp   #Meths         Time    bcb/ms  mcb/bcb      MCKB    BCKB
>> JNI        35         2.44        NA       NA      15.5      NA
>> Base    26074      8082.06    194.01    10.51   22977.7  2186.7
>> Opt       722     14685.43      2.46     6.76     226.7    33.5
>>
>>
>> Regards
>> Santiago
>>
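
One more observation from the bcb/ms column above: the baseline compiler 
translates bytecodes roughly 80 times faster than the optimizing compiler, 
which is the usual argument for keeping a cheap first tier around for 
startup time.  A quick check of that arithmetic (treating bcb/ms as 
bytecodes compiled per millisecond, which is how I read the report):

// Quick arithmetic on the Base and Opt rows of the report above.
public class CompileRateSketch {
    public static void main(String[] args) {
        double baselineRate = 194.01; // Base row, bcb/ms
        double optRate = 2.46;        // Opt row, bcb/ms
        System.out.printf("baseline compiles ~%.0fx faster than opt%n",
                          baselineRate / optRate);
    }
}
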
Are you using the GenMS collector?

cheers,
Robin
