cloudstack-dev mailing list archives

From Sowmya Krishnan <sowmya.krish...@citrix.com>
Subject RE: New Components on JIRA
Date Sun, 28 Jul 2013 16:05:25 GMT
Agreed, test issues showing up in blockers/critical is confusing.
For product issues found through automated tests, I guess we could use Issue Type = Bug and
Component = Automation, in addition to the actual component where the bug is found.
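For anyone who wants to pull these out later, that convention maps to a simple JQL filter. This is only a sketch; the CLOUDSTACK project key is an assumption:

```
project = CLOUDSTACK AND issuetype = Bug AND component = Automation ORDER BY priority DESC
```

A similar filter restricted to priority in (Blocker, Critical) would give the RMs a blocker list free of test-only issues, since those would be filed under issue type 'Test' instead.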


> -----Original Message-----
> From: Prasanna Santhanam [mailto:tsp@apache.org]
> Sent: Sunday, July 28, 2013 5:23 PM
> To: dev@cloudstack.apache.org
> Subject: Re: New Components on JIRA
> 
> On Wed, Jul 24, 2013 at 10:15:12AM +0000, Ram Ganesh wrote:
> > > >
> > > > Prasanna,
> > > >
> > > > How about  - automation-product and automation-script components?
> > > > Automation-product for all product bugs discovered by the
> > > > automation engine and automation-script for all automation script issues?
> > > >
> > >
> > > Right now - that distinction is not clear at least from the bug
> > > reports. We're reusing the same report for both script and product
> > > failure. So anything filed from an automated test failure should
> > > just be automation and on further analysis if it is found to be
> > > product failure, a clearer bug report would be necessary within the
> > > right component of the product - api, network, systemvm etc.
> > >
> >
> > Yes, you are right. Component is not the right field; maybe we could
> > use a label. A JIRA report listing product issues discovered through
> > automation would be very valuable, and a label may be the field for that.
> >
> 
> So I noticed there are a few issue types in JIRA - Bug, Improvement, Task and
> Test. We should use 'Test' for anything related to automated tests. This will
> help the RMs filter out issues that are not really product failures but rather
> missing or failing tests. Right now, having automated tests show up in the
> blocker and critical lists is a little confusing.
> 
> I'm not really sure how we can track product issues caught by automated tests
> other than with simple labelling, maybe? It would be a good report to have, to
> reveal our coverage and strengthen the suites.
> 
> The lifecycle of a bug as I see it is something like this:
> 
> Test fails on jenkins
>  \
>   --> Test failure filed under issue type 'Test'
>        \
>         --> If cloudstack issue, bug report filed with repro steps and logs
>         --> OR test fixed appropriately, reviewed and pushed
> 
> Similarly, for any bug filed,
> 
> Bug found in product
> \
>   --> Identify whether it is missing from the tests and is automatable
>   \
>     --> Add a 'Test' issue for addressing the missing test
>     \
>       --> Push test to repo and add it to jenkins runs
> 
> These may be 'noble' goals, but it would be great if we collectively started
> the effort of identifying areas that can be covered by tests.
> 
> --
> Prasanna.,
> 
> ------------------------
> Powered by BigRock.com

