Subject: Re: [QA] AOO 3.4.1 Performance Verification Test (PVT) Introduction
From: Yi Xuan Liu <liuyixuan.527@gmail.com>
To: ooo-dev@incubator.apache.org
Date: Sat, 7 Jul 2012 08:09:26 +0800

Rob, thanks for your reply. My answers to your questions are below.

1) Not yet. I'll check in the code next week, as soon as possible.

2) It takes about 2 hours to complete the whole test on my W500 laptop
(CPU: 2.53 GHz; Mem: 3 GB; OS: XP SP3). So, with 100 iterations of each
test, it would take about 1 day. Yes, it is also a good way to check for
memory leaks :)

3) The order is b).

4) No, we don't restart OpenOffice during the whole PVT.

Also, thanks for your advice about the trick you mentioned. I'll try it.

On Sat, Jul 7, 2012 at 1:28 AM, Rob Weir wrote:
> On Thu, Jul 5, 2012 at 11:11 PM, zhangjf wrote:
> > The right url is
> > http://wiki.services.openoffice.org/wiki/Performance/AOO3.4.1_PVT_Introduction
>
> Great. Thanks!
>
> A few questions:
>
> 1) Are the scripts and test documents checked into SVN?
>
> 2) How long does it take for a complete run of the tests?
>
> 3) What is the order of the tests?
> For example, are you doing:
>
> a) document 1 run 1, document 1 run 2, document 1 run 3...document 1
> run 8, document 2 run 1...document 2 run 8, etc.
>
> or
>
> b) document 1 run 1, document 2 run 1, document 3 run 1...document N
> run 1, then document 1 run 2, document 2 run 2, etc.
>
> 4) Within the test do you restart OpenOffice? If so, do you restart
> after every document? Or every measurement?
>
> If it is at all possible to take more measurements, I think we would
> get more high quality results. Right now, you take 8 measurements and
> throw away 3 of them (first run, highest time and lowest time). That
> throws away information and biases the results, because the first run
> is probably also slower, so you toss out the two slowest runs but only
> toss out the single fastest run. But the fastest run is probably also
> the most accurate one, since there are many things in a test that can
> accidentally slow things down, but almost nothing can happen to make a
> test run faster (assuming the test logic is accurate).
>
> In general, in an experiment, keep all the data you have, and get more
> accurate results by doing more repetitions.
>
> For example, what if we did 100 iterations of each test? How long
> would that take?
>
> If we did that, it would have some benefits:
>
> 1) We wouldn't need to worry about tossing out high and low values.
> Our error bounds would be good because of the number of runs we have.
> The impact of any one anomalous measurement will be much smaller.
>
> 2) We could at the same time look at the trend of the measurement over
> the test run. For example, compare the average of the first 10% of
> the runs with the average of the last 10%. Is there a difference? If
> a test slows down over time that might indicate a memory leak or
> other problem. You will never find this with only 8 measurements.
>
> 3) It would tell us the distribution of timings, as well as the average.
>
> (Another trick: if you are going to do N load measurements of the
> same document, maybe start the test run by creating N identical copies
> of the same document on disk. Then load each copy only once. That
> helps even out the disk cache and I/O environment compared to loading
> the exact same file N times.)
>
> -Rob
>
> > On Fri, Jul 6, 2012 at 11:07 AM, Yi Xuan Liu wrote:
> >> Hi, all:
> >>
> >> I wrote a wiki
> >> http://wiki.services.openoffice.org/wiki/Performance/AOO3.4.1_PVT_Introduction
> >> about PVT project in AOO 3.4.1.
> >> Any comment is welcomed!
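As an illustration of the measurement discussion above, here is a minimal
Python sketch (made-up load times; not part of the actual PVT scripts, which
are not yet checked into SVN) contrasting the current "8 runs, discard
first/highest/lowest" summary with a plain mean over all runs. The exact
trimming rule here is one possible reading of the scheme described on the
wiki.

```python
import statistics

# Hypothetical load times in seconds; run 1 is a cold start and slower.
timings = [4.9, 3.1, 3.0, 3.2, 3.1, 3.0, 3.3, 3.1]

def trimmed_mean(runs):
    """One reading of the current scheme: drop the first run, then the
    highest and lowest of the remaining runs, and average the rest."""
    rest = sorted(runs[1:])
    return statistics.mean(rest[1:-1])

def full_summary(runs):
    """Keep every measurement; report mean and standard deviation."""
    return statistics.mean(runs), statistics.stdev(runs)

print("trimmed mean:", round(trimmed_mean(timings), 3))
mean, stdev = full_summary(timings)
print(f"full mean = {mean:.3f}s, stdev = {stdev:.3f}s")
```

With enough repetitions the plain mean plus a standard deviation carries more
information than a trimmed value, which is Rob's point about keeping all the
data.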
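Rob's trend check (compare the average of the first 10% of runs with the
last 10%) can be sketched the same way. The 100-iteration sequence below is
hypothetical and drifts upward only to show what a leak-like signal would
look like; the real harness would supply the measured timings.

```python
import statistics

def trend_report(timings, fraction=0.10):
    """Compare the average of the earliest and latest slices of a run sequence.

    A clearly higher tail average can hint at a memory leak or some other
    slowdown accumulating over the test run.
    """
    n = max(1, int(len(timings) * fraction))
    head = statistics.mean(timings[:n])
    tail = statistics.mean(timings[-n:])
    return head, tail, (tail - head) / head * 100.0

# Hypothetical 100-iteration sequence that drifts slowly upward.
runs = [3.0 + 0.004 * i for i in range(100)]
head, tail, drift_pct = trend_report(runs)
print(f"first 10% avg = {head:.3f}s, last 10% avg = {tail:.3f}s, drift = {drift_pct:+.1f}%")
print(f"overall: mean = {statistics.mean(runs):.3f}s, stdev = {statistics.stdev(runs):.3f}s, "
      f"min = {min(runs):.3f}s, max = {max(runs):.3f}s")
```

The same run sequence also gives the distribution summary (mean, spread,
min/max) mentioned in point 3 above for free.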
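And a sketch of the "N identical copies" trick: copy the test document N
times before the run, then time loading each copy once instead of reloading
the same file N times. The file names and the load_and_time() helper are
hypothetical; in the real harness the load would of course go through the
office itself.

```python
import shutil
from pathlib import Path

def make_copies(source, work_dir, n):
    """Create n identical copies of `source` in `work_dir` and return their paths."""
    source = Path(source)
    work_dir = Path(work_dir)
    work_dir.mkdir(parents=True, exist_ok=True)
    copies = []
    for i in range(n):
        dest = work_dir / f"{source.stem}_{i:03d}{source.suffix}"
        shutil.copyfile(source, dest)
        copies.append(dest)
    return copies

# Usage sketch (load_and_time is a hypothetical helper that opens the
# document in OpenOffice and returns the load time in seconds):
# copies = make_copies("plain_100p.odt", "pvt_workdir", 100)
# timings = [load_and_time(path) for path in copies]
```

Loading distinct but identical files keeps the OS disk cache from favouring
later runs, which is the point of the trick.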