cloudstack-dev mailing list archives

From Daan Hoogland <>
Subject Re: Tiered Quality
Date Thu, 31 Oct 2013 20:12:56 GMT
one note on testing, guys:

I see that the analysis site gives line coverage and branch coverage. I
don't see anything on distinct paths. What I mean is that a method with
three independent conditions (a, b, c) selecting between six blocks
(A..F) has eight (2^3) distinct paths. It is not enough to show that A,
B, C, D, E and F are all hit, and hence every line and branch; all
combinations of a/!a, b/!b and c/!c need to be hit as well.

Now I am not saying that we should not score our code this way, but we
are kidding ourselves if we don't face up to the fact that line or
branch coverage is not a completeness criterion of any kind. I don't
know whether any of the mentioned tools does analysis this thorough,
but if one does we should go for it.
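The point can be sketched with a hypothetical method (the names a/b/c
and A..F below are illustrative, not CloudStack code): two calls are
enough for 100% line and branch coverage, yet they exercise only two of
the eight paths.

```java
public class PathCoverage {

    // Three independent conditions select between six blocks (A..F).
    public static int f(boolean a, boolean b, boolean c) {
        int x = 0;
        if (a) { x += 1; } else { x += 10; }  // A else B
        if (b) { x += 2; } else { x += 20; }  // C else D
        if (c) { x += 4; } else { x += 40; }  // E else F
        return x;
    }

    public static void main(String[] args) {
        // These two calls hit every line and every branch...
        System.out.println(f(true, true, true));    // path A-C-E, prints 7
        System.out.println(f(false, false, false)); // path B-D-F, prints 70
        // ...but the other six paths, e.g. A-D-E via f(true, false, true),
        // are never executed, so a bug on one of them would go unseen
        // while the coverage report still shows 100%.
    }
}
```

Full path coverage here needs all eight combinations; with n independent
conditions the count grows as 2^n, which is presumably why most tools
stop at branch coverage.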


On Tue, Oct 29, 2013 at 2:21 AM, Darren Shepherd
<> wrote:
> Starting with the honor system might be good.  It's not always easy to
> relate lines of code to functionality.  Also, just because a test hits a
> line of code doesn't mean the line is really tested.
> Can't we just get people to put a check mark on some table in the wiki?
> Darren
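Darren's last point, that executing a line is not the same as testing
it, can be sketched as follows (the names are illustrative):

```java
public class CoverageWithoutTesting {

    // Deliberately buggy: subtracts instead of adding.
    public static int add(int a, int b) {
        return a - b;
    }

    public static void main(String[] args) {
        // This call gives add() 100% line coverage, but the result is
        // never checked, so the bug goes unnoticed.
        add(2, 3);
        // A real test would have caught it:
        //   assert add(2, 3) == 5;  // fails: add(2, 3) returns -1
        System.out.println("all lines covered, nothing verified");
    }
}
```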
>> On Oct 28, 2013, at 12:08 PM, Santhosh Edukulla <> wrote:
>> 1. It seems we already have code coverage numbers using Sonar, as
>> below. It currently shows only the numbers for unit tests.
>> 2. The link below has an explanation for using it for both integration
>> and unit tests.
>> 3. Many links suggest it has good decision coverage facilities compared
>> to other coverage tools.
>> Regards,
>> Santhosh
>> ________________________________________
>> From: Laszlo Hornyak []
>> Sent: Monday, October 28, 2013 1:43 PM
>> To:
>> Subject: Re: Tiered Quality
>> Sonar already tracks the unit test coverage. It is also able to track
>> the integration test coverage, though this may be more complicated in
>> CloudStack since not all hardware/software requirements are available
>> in the Jenkins environment. Then again, that could be a problem in any
>> environment.
>>> On Mon, Oct 28, 2013 at 5:53 AM, Prasanna Santhanam <> wrote:
>>> We need a way to check coverage of (unit+integration) tests: how many
>>> lines of code are hit on a deployed system for the component
>>> donated/committed. We don't have that for existing tests, so it is
>>> hard to judge whether a feature that comes with tests covers enough of
>>> itself.
>>>> On Sun, Oct 27, 2013 at 11:00:46PM +0100, Laszlo Hornyak wrote:
>>>> Ok, makes sense, but that sounds like even more work :) Can you share
>>>> the plan on how this will work?
>>>> On Sun, Oct 27, 2013 at 7:54 PM, Darren Shepherd <> wrote:
>>>>> I think it can't be at a component level because components are too
>>>>> large. It needs to be at a feature or implementation level.  For
>>>>> example, live storage migration for Xen and live storage migration
>>>>> for KVM (don't know if that's a real thing) would be two separate
>>>>> items.
>>>>> Darren
>>>>> On Oct 27, 2013, at 10:57 AM, Laszlo Hornyak <> wrote:
>>>>>> I believe this will be very useful for users.
>>>>>> As far as I understand someone will have to qualify components.
>>>>>> What will be the method for qualification? I do not think simply
>>>>>> the test coverage would be right. But then if you want to go
>>>>>> deeper, you need a bigger effort testing the components.
>>>>>> On Sun, Oct 27, 2013 at 4:51 PM, Darren Shepherd <> wrote:
>>>>>>> I don't know if a similar thing has been talked about before, but
>>>>>>> I thought I'd just throw this out there.  The ultimate way to
>>>>>>> ensure quality is that we have unit test and integration test
>>>>>>> coverage of all functionality.  That way somebody authors some
>>>>>>> code, commits it to, for example, 4.2, but then when we release
>>>>>>> 4.3, 4.4, etc. they aren't on the hook to manually test the
>>>>>>> functionality with each release.  The obvious nature of a
>>>>>>> community project is that people come and go.  If a contributor
>>>>>>> wants to ensure the long term viability of their component, they
>>>>>>> should ensure that there are unit+integration tests.
>>>>>>> Now, for whatever reason, good or bad, it's not always possible
>>>>>>> to have full integration tests.  I don't want to throw down the
>>>>>>> gauntlet and say everything must have coverage, because that
>>>>>>> would mean useful code/features will not get in when full
>>>>>>> coverage is not possible at the time.
>>>>>>> What I propose is that for every feature or function we put it in
>>>>>>> a tier describing its quality (very similar to how OpenStack
>>>>>>> qualifies their hypervisor integration).  Tier A means unit test
>>>>>>> and integration test coverage gate the release.  Tier B means
>>>>>>> unit test coverage gates the release.  Tier C means who knows, it
>>>>>>> compiled.  We can go through and classify the components and then
>>>>>>> as a community we can try to get as much into Tier A as possible.
>>>>>>> Darren
>>>>>> --
>>>>>> EOF
>>>> --
>>>> EOF
>>> --
>>> Prasanna.,
>>> ------------------------
>>> Powered by
>> --
>> EOF
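For reference, the Sonar unit test coverage numbers discussed above
typically come from a coverage agent such as JaCoCo. A minimal Maven
sketch follows; the version and execution layout are assumptions, so
check the jacoco-maven-plugin documentation for the release you use.

```xml
<!-- Sketch only: instruments unit tests and writes a report Sonar
     can import; versions and phases may differ in your build. -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.8.11</version>
  <executions>
    <!-- attach the coverage agent to the unit test JVM -->
    <execution>
      <goals><goal>prepare-agent</goal></goals>
    </execution>
    <!-- generate the coverage report after tests run -->
    <execution>
      <id>report</id>
      <phase>verify</phase>
      <goals><goal>report</goal></goals>
    </execution>
  </executions>
</plugin>
```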
