On Tue, Jul 3, 2012 at 9:54 AM, Yi Xuan Liu <liuyixuan.527@gmail.com> wrote:
> The reason for removing the first-round result is that we found it
> differs greatly from the other rounds: opening the sample file is
> usually slower the first time. So we treated the first-round result as
> an outlier and removed it when computing the average and standard
> deviation. However, the first round is also important for the user
> experience. Maybe we could list the first-round performance
> separately.
>
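The procedure described above (drop round 1, average the rest, report round 1 separately) could be sketched like this; the timings are hypothetical, and only the Python standard library is used:

```python
import statistics

# Hypothetical per-round load times in seconds; round 1 includes
# cold-start I/O, so it is reported separately rather than averaged.
rounds = [2.31, 1.13, 1.10, 1.16, 1.12, 1.14]

first, rest = rounds[0], rounds[1:]
avg = statistics.mean(rest)
sigma = statistics.stdev(rest)
print(f"first round: {first:.2f}s, avg: {avg:.2f}s, sigma: {sigma:.2f}s")
# prints: first round: 2.31s, avg: 1.13s, sigma: 0.02s
```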
That's what I was wondering as well. If the first run is slow because
you are loading additional software needed to measure the test runs,
then it is fair to discard the first run. But if the first run is
slow because of I/O related to loading the AOO code or the document,
then that is real and part of the actual user experience.
> As for the second value, a t-test is a good suggestion and I'll look
> into it and use it in the following tests. Thanks!
>
Here is a spreadsheet demonstrating the approach I use when doing
performance comparisons:
http://people.apache.org/~robweir/perf/perfcompare.ods
It uses the chart to give a visual indication of whether the measured
differences are within the range of expected variability, or whether
they are "real".
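For anyone who wants the numeric check rather than the chart, here is a minimal sketch of the interval comparison in pure standard-library Python; the timing data is made up:

```python
import math
import statistics

def conf_interval(samples, z=1.96):
    """Approximate 95% confidence interval for the mean of `samples`."""
    mean = statistics.mean(samples)
    # Standard error of the mean = sample std deviation / sqrt(n)
    stderr = statistics.stdev(samples) / math.sqrt(len(samples))
    return (mean - z * stderr, mean + z * stderr)

def intervals_overlap(a, b):
    """True if two (low, high) intervals overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical load times (seconds) before and after a change
before = [1.52, 1.48, 1.55, 1.50, 1.49]
after = [1.41, 1.38, 1.44, 1.40, 1.39]

ci_before = conf_interval(before)
ci_after = conf_interval(after)
print("possibly no real difference" if intervals_overlap(ci_before, ci_after)
      else "clear difference")
# prints: clear difference
```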
Rob
>
> On Tue, Jul 3, 2012 at 7:41 PM, Rob Weir <robweir@apache.org> wrote:
>
>> On Tue, Jul 3, 2012 at 6:39 AM, Linyi Li <lilinyi921734@gmail.com> wrote:
>> > From Xuan Xuan's introduction at the beginning, I think the first
>> > number is the average of the test results and the second is the
>> > standard deviation.
>> >
>>
>> So why skip the numbers for the first round test? Isn't that what
>> real users see, the first round? Sure, it will be slower as code is
>> loaded into memory, files read from disk, etc. But the same thing
>> happens for users.
>>
>> Also, I think the interesting 2nd number is the "standard error of the
>> mean", which == std deviation / sqrt(count of measurements). That is
>> what gives the error bars (confidence interval) on the measurement.
>> For example, 95% confidence limits on a measurement would be:
>>
>> lower bound = mean - 1.96*standard_error
>> upper bound = mean + 1.96*standard_error
>>
>> An easy "rule of thumb" is to compare the "before" and "after"
>> measures and see if there is overlap in the intervals.
>>
>> For example:
>>
>> Before interval: (1.0, 2.0)
>> After interval: (1.5, 2.5)
>>
>> Because the intervals overlap, there might not be a significant
>> difference between the two.
>>
>> But:
>>
>> Before interval: (1.0, 2.0)
>> After interval: (2.5, 3.5)
>>
>> In this case there is a clear difference, because the confidence
>> intervals do not overlap.
>>
>> A t-test could also be used here, but the above approach works well in
>> Calc if you use the "stock 2" type chart. This has series for
>> high/low/close/open. So you could do something where the high and low
>> values are 95% confidence intervals. This makes it easy to tell what
>> is important at a glance.
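If a t-test is wanted without extra tooling, the statistic itself is cheap to compute. A sketch using Welch's formula on made-up timings (as a rough guide, |t| much larger than ~2 with samples this small suggests a real difference):

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(
        va / len(a) + vb / len(b))

# Hypothetical before/after load times in seconds
before = [1.52, 1.48, 1.55, 1.50, 1.49]
after = [1.41, 1.38, 1.44, 1.40, 1.39]

print(round(welch_t(before, after), 2))  # well above 2: likely a real change
```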
>>
>> Rob
>>
>> >
>> > On Tue, Jul 3, 2012 at 5:36 PM, Ji Yan <yanji.yj@gmail.com> wrote:
>> >
>> >> I'm sure Yi Xuan will update her wiki page with detailed test cases
>> >> and the meaning of the report data
>> >>
>> >> 2012/7/3 Andre Fischer <af@awf.de>
>> >>
>> >> > On 03.07.2012 09:02, Herbert Duerr wrote:
>> >> >
>> >> >> ****
>> >> >>> 
>> >> >>>  Filter  odt Load Show  Plain  0.72/ 0.03 
>> >> >>>    Complex  1.13/ 0.03

>> >> >>> ****
>> >> >>> 
>> >> >>> [...]
>> >> >>> Any comment is welcomed!
>> >> >>>
>> >> >>
>> >> >> Thanks for sharing this interesting data.
>> >> >> I haven't found an explanation of what these numbers mean, though,
>> >> >> so I have to guess: the first number is the average value and the
>> >> >> second is the sigma value for running the test, right?
>> >> >>
>> >> >
>> >> > It would be good to put any explanation/documentation on the wiki
>> >> > page, or else the information about the test parameters from the
>> >> > first mail (8 runs, average over 5, what is plain or complex) would
>> >> > be lost.
>> >> >
>> >> > Andre
>> >> >
>> >>
>> >>
>> >>
>> >> --
>> >>
>> >>
>> >> Thanks & Best Regards, Yan Ji
>> >>
>> >
>> >
>> >
>> > --
>> > Best wishes.
>> > Linyi Li
>>