harmony-dev mailing list archives

From Mark Hindess <mark.hind...@googlemail.com>
Subject Re: [performance] The DaCapo benchmark suite
Date Wed, 03 Feb 2010 11:17:20 GMT

In message <4B68ECC1.4010604@anu.edu.au>, Robin Garner writes:
>
> Mark Hindess wrote:
>
> > I decided to have a play with the dacapo-9.12-bach benchmark suite.
> > I've appended some preliminary results below.
> >
> > I attempted to run all the benchmarks 10 times on the linux/x86_64
> > milestone releases.  For this run I used:
> > 
> >   java -Xms128M -Xmx1024M -showversion -jar $DACAPO_JAR $BENCHMARK
> > 
> > That is, I used the default arguments to dacapo.
> 
> Another statistic that might be of particular interest is to time
> the nth iteration of the benchmark (eg with the dacapo argument "-n
> 3" or "-n 10").  The performance of the first iteration includes a
> large contribution from the compiler - later iterations more directly
> compare the generated code.

After posting the last set of results, I started another run with
5.0M12a, trunk, RI 5.0 and RI 6.0 using the '-C' option[0] to try to get
a set of results with the compiler overhead reduced.  I assumed that
was a better approach than trying to guess a suitable value for the -n
option.
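In other words, roughly the same invocation as before with the '-C'
flag added:

   java -Xms128M -Xmx1024M -showversion -jar $DACAPO_JAR -C $BENCHMARK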

> If you do time later iterations, beware that some benchmarks on some
> JVMs are performance-unstable - there can be performance fluctuations
> between subsequent runs, and the best strategy might be to run 20
> iterations, take the best time of the last 10 iterations, and report
> the mean of 10 invocations.
>
> I'd be very interested (as a member of the dacapo group) if you see
> performance instability with drlvm.

Hopefully there isn't so much variation that my current run fails to
complete.  Assuming it does complete (or once I kill it), I will re-run
it, recording the timings as you suggest, and report the results.
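For that re-run I'm thinking of something along these lines per
benchmark (20 iterations per invocation, 10 invocations, as you
describe; the loop below is just a sketch of the idea):

   for i in $(seq 1 10); do
     java -Xms128M -Xmx1024M -jar $DACAPO_JAR -n 20 $BENCHMARK
   done

then take the best of the last 10 iteration times from each invocation
and report the mean over the 10 invocations.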

Many thanks for your input.

Regards,
 Mark.

[0] For those not familiar with the options:

   -C,--converge            Allow benchmark times to converge before timing
   -n,--iterations <iter>   Run the benchmark <iter> times


