incubator-ooo-dev mailing list archives

From Rob Weir <robw...@apache.org>
Subject Re: Willing help on Test
Date Wed, 02 Nov 2011 13:34:57 GMT
On Wed, Nov 2, 2011 at 4:13 AM, Oliver-Rainer Wittmann
<orwittmann@googlemail.com> wrote:
> Hi,
>
> On 02.11.2011 03:30, Raphael Bircher wrote:
>>
>> ...
>>
>> But anyway, the OOo workflow has changed, and we need to reorganize
>> the QA.  First we should reorganize the QA, and then we should talk
>> about the tools we need.  That's my opinion.
>>
>
> I agree with Raphael here that things have changed.
> The former processes worked, but there is no need to revive
> everything.
>
> Just a developer's point of view on this.
>


So what do we have?  What do we need?

I have no idea how QA was done before for OpenOffice.org, but it
makes sense that you would have basic elements like:


1) Unit tests that developers can execute before checking in code.  We
already have those, right?  Are they working?  Do they have good
coverage?  Would it be worth improving testing at that level?  (A
sketch of what such a test could look like follows after this list.)

2) Manual scripted tests.  These could be based on written test cases
and test documents.  Such tests require some expertise to design and
write, but once the test cases are written they can be executed by a
much larger set of volunteers.  Even power users could be helpful
here.  A good tester follows the test case, but also knows how to
describe a bug in the defect report with all the necessary detail and
little extraneous detail.  They know "how to think like a bug".  (An
example test case follows after this list.)

3) Free-form testing.  Volunteers are asked to test a new build with
little additional direction.  Maybe they are asked to "focus" on a
particular area, but they are not following a test script or plan.
This is not really a quality engineering approach, since we have no
idea of the test coverage or effort involved.  So in practice it is
not sufficient, though it could supplement a QA plan.

4) Scripted/automated testing via the GUI.  This requires more effort
and skill to write and maintain, but once done, it requires less
effort to execute.  (A sketch follows after this list.)
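
On 1), I don't know what the existing OOo unit test framework looks
like, so treat this purely as a sketch of the idea: a self-contained
JUnit 4 test (the PercentParserTest class and parsePercent helper are
made up) of the kind a developer could run before check-in.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.fail;

    // Hypothetical example, not real OOo code: a tiny percent-parsing
    // helper plus the kind of checks a developer runs before check-in.
    public class PercentParserTest {

        // The code under test would normally live in the product,
        // not in the test file; it is inlined here to be runnable.
        static double parsePercent(String s) {
            if (s == null || !s.endsWith("%")) {
                throw new IllegalArgumentException("not a percent: " + s);
            }
            String digits = s.substring(0, s.length() - 1).trim();
            return Double.parseDouble(digits) / 100.0;
        }

        @Test
        public void parsesSimplePercentValue() {
            assertEquals(0.5, parsePercent("50%"), 1e-9);
        }

        @Test
        public void rejectsGarbageInput() {
            try {
                parsePercent("fifty");
                fail("expected IllegalArgumentException");
            } catch (IllegalArgumentException expected) {
                // malformed input must fail loudly, not return 0
            }
        }
    }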
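
On 2), likewise, I don't know what format the old OOo test cases
used, but a written test case does not need to be fancy.  A made-up
example of the level of detail I mean:

    Test case:     WRITER-0001 (hypothetical ID scheme)
    Title:         Save a new text document as ODF
    Preconditions: Clean user profile; Writer installed
    Steps:
      1. Start Writer with a new, empty document.
      2. Type one paragraph of text.
      3. File > Save As, pick "ODF Text Document (.odt)", save.
      4. Close the document and reopen the saved file.
    Expected:      The file opens without errors and the paragraph
                   text is unchanged.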
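
On 4), strictly speaking the sketch below drives the document model
through the UNO API rather than the GUI itself, but it shows the
shape of a scripted test: automated steps with machine-checked
results.  The class name and the checks are my invention, and it
assumes the office SDK jars (juh, jurt, ridl, unoil) are on the
classpath.

    import com.sun.star.beans.PropertyValue;
    import com.sun.star.comp.helper.Bootstrap;
    import com.sun.star.frame.XComponentLoader;
    import com.sun.star.lang.XComponent;
    import com.sun.star.lang.XMultiComponentFactory;
    import com.sun.star.text.XTextDocument;
    import com.sun.star.uno.UnoRuntime;
    import com.sun.star.uno.XComponentContext;

    // Sketch of an automated smoke test driven through the UNO API.
    public class WriterSmokeTest {
        public static void main(String[] args) throws Exception {
            // Start (or connect to) an office instance.
            XComponentContext xContext = Bootstrap.bootstrap();
            XMultiComponentFactory xMCF = xContext.getServiceManager();

            Object desktop = xMCF.createInstanceWithContext(
                    "com.sun.star.frame.Desktop", xContext);
            XComponentLoader xLoader = (XComponentLoader)
                    UnoRuntime.queryInterface(
                            XComponentLoader.class, desktop);

            // Open an empty Writer document.
            XComponent xComp = xLoader.loadComponentFromURL(
                    "private:factory/swriter", "_blank", 0,
                    new PropertyValue[0]);
            XTextDocument xTextDoc = (XTextDocument)
                    UnoRuntime.queryInterface(
                            XTextDocument.class, xComp);

            // Type some text and verify it round-trips.
            xTextDoc.getText().setString("smoke test");
            if (!"smoke test".equals(xTextDoc.getText().getString())) {
                throw new AssertionError("text did not round-trip");
            }

            xComp.dispose();
            System.out.println("Writer smoke test passed.");
        }
    }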

For all of these, the question we should be asking as a project is:
is the product ready to release?  What is our confidence that the
release does not have a horrible bug some place?  In other words, a
key question is test coverage: what portion of the product's features
have been tested?  If, say, only a third of the features have ever
been exercised by a test, then we are just guessing about the other
two thirds.

So how do we get started on this?  Do we have test plans that, if we
found volunteers to execute them (even manually), would give high
test coverage?

Also, with existing volunteers, what mix of skills do we have?  Test
plan development?  Test execution?  Bugzilla issue verification?  Test
automation design and coding?

-Rob


> Best regards, Oliver.
>
