incubator-cloudstack-dev mailing list archives

From David Nalley <da...@gnsa.us>
Subject Re: [ACS41][QA] Test plan for List API Performance Optimization
Date Mon, 04 Mar 2013 15:31:01 GMT
Inline reply.

On Thu, Feb 28, 2013 at 2:09 AM, Sowmya Krishnan
<sowmya.krishnan@citrix.com> wrote:
> Thanks for taking time to review the plan David. Answers inline.
>
>> -----Original Message-----
>> From: David Nalley [mailto:david@gnsa.us]
>> Sent: Thursday, February 28, 2013 3:26 AM
>> To: cloudstack-dev@incubator.apache.org
>> Subject: Re: [ACS41][QA] Test plan for List API Performance Optimization
>>
>> On Fri, Feb 22, 2013 at 1:24 PM, Sowmya Krishnan
>> <sowmya.krishnan@citrix.com> wrote:
>> > Hi,
>> >
>> > I've posted a test plan for tracking the performance numbers for the
>> > set of List APIs which were optimized as mentioned in
>> > https://issues.apache.org/jira/browse/CLOUDSTACK-527
>> > Test plan is here:
>> > https://cwiki.apache.org/confluence/display/CLOUDSTACK/List+API+Perfor
>> > mance+Test+Plan
>> >
>> > Please take a look and post comments if any.
>> >
>>
>>
>> Thanks for writing this up, I have a couple of questions for you.
>>
>> I understand that you are running these tests and recording performance, but it
>> seems like you are measuring time. Is this time from query leaving the client to
>> answer? Is the client on the management server or not?
>>
> Yes. Time is measured from the instant the query is fired from the client until the
> complete response has been received. The client is not on the management server; I'll
> fire queries from a server separate from both the MS and the DB.
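A minimal sketch of that client-side measurement, assuming Python; the URL in the usage note is a placeholder (a real call would carry the signed CloudStack API parameters). The point is to stop the clock only after the full response body has been drained, not at the first byte:

```python
import time
import urllib.request

def time_list_call(url):
    """Wall-clock time from firing the request until the complete
    response body has been received, plus the body size in bytes."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        body = resp.read()  # drain the full response before stopping the clock
    elapsed = time.perf_counter() - start
    return elapsed, len(body)
```

Usage would look something like `time_list_call("http://client-host:8080/client/api?command=listVirtualMachines&...")`, run from the separate client host described above.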
>
>> I assume you are going to use the simulator, and not just have a populated DB?
>> (If that isn't the case, perhaps you can share the db
>> dump.)
>>
> The plan is to use the simulator to create hosts, routers, VMs, etc., generating load on
> the DB and management server rather than populating the DB directly.
> If, at a later stage, there's a need to run much higher loads than those mentioned in the
> test plan, beyond what my test servers can sustain, I might switch to using a DB dump.
> But I don't foresee that for now.
>
>> Are you going to take a baseline from 4.0.{0,1}?
>>
> I have some numbers for the List APIs from before the API optimization. I'll use those as the baseline.
>
>> Can this test be written up as a script and generate these statistics as we get
>> near a release to ensure we don't regress?
>>
>
> Sure. I already have some scripts for generating load. I'll write a few more scripts to
> track the time taken by the List APIs.
>

Great - perhaps we can talk with Prasanna about getting those
completely automated with Jenkins.

>> Are we assuming there will be no slow/long running queries? If there are, it
>> might be interesting to see what those are and if there are database issues we
>> can further work on?
>>
> I usually have the DB log slow queries. I can publish those too as part of the results.
>
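For reference, MySQL's slow-query logging can be enabled with a my.cnf fragment like the following; the file path and one-second threshold are illustrative, not values taken from the test plan:

```ini
# my.cnf -- log queries that run longer than one second
[mysqld]
slow_query_log                = 1
slow_query_log_file           = /var/log/mysql/slow.log
long_query_time               = 1
log_queries_not_using_indexes = 1
```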
>> What is 'failure' of this test? (slower than 4.0.x?, slower than n-percent-faster
>> than 4.0.x?)
>>
> I have some numbers recorded for a few List APIs from before the API refactoring was
> done. I'll take those as the baseline and call out failures against it for a start. Going
> forward, I'll try to automate the regression runs so that we catch any regressions early.
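A sketch of what such a regression check could look like, assuming Python; the API names, baseline figures, and 10% tolerance below are placeholders for illustration, not recorded numbers:

```python
# Illustrative baseline response times in milliseconds -- the real recorded
# pre-optimization numbers would go here.
BASELINE_MS = {
    "listVirtualMachines": 900,
    "listHosts": 400,
}
TOLERANCE = 1.10  # flag anything more than 10% slower than baseline

def find_regressions(measured_ms, baseline=BASELINE_MS, tolerance=TOLERANCE):
    """Return the APIs whose measured time exceeds baseline * tolerance."""
    return {
        api: ms for api, ms in measured_ms.items()
        if api in baseline and ms > baseline[api] * tolerance
    }
```

A Jenkins job could run the timing scripts, feed the results through a check like this, and fail the build when the returned dict is non-empty.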


Can we get the baseline numbers published somewhere (as well as the
numbers you get in your tests of 4.1)?

--David
