incubator-ooo-dev mailing list archives

From Mathias Bauer <Mathias_Ba...@gmx.net>
Subject Re: Willing help on Test
Date Sat, 05 Nov 2011 13:10:55 GMT
On 02.11.2011 14:34, Rob Weir wrote:

> So what do we have?  What do we need?
> 
> I have no idea how QA was done before for OpenOffice.org, but it makes
> sense that you have basic elements like:
> 
> 
> 1) Unit tests that developers can execute before checking in code.  We
> already have those, right?  Are they working?  Do they have good
> coverage?  Would it be worth improving testing at that level?

Unit tests exist only for some low-level libraries. We have some
so-called "complex tests" and some simple API tests. All of them are
definitely worth improving; that's the best we could do.

We never investigated coverage, so no idea how much code is covered by
these tests.

Writing unit tests for most of the "higher level" OOo code is hard or
close to impossible, as the code refuses to be run in a test harness:
too much code depends on too much other code.
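To illustrate the point (a Python sketch only, not OOo code - the class and
method names here are invented for the example): a unit becomes testable once
its dependencies can be injected, which is exactly what tightly coupled code
does not allow.

```python
# Illustrative sketch (not OOo code): dependency injection makes a unit testable.

class HardToTest:
    """Reaches into global state; can't run in a harness without the whole app."""
    def save(self, doc):
        get_global_storage().write(doc)  # only works with the full application up

class EasyToTest:
    """The storage dependency is injected, so a test can pass in a fake."""
    def __init__(self, storage):
        self.storage = storage
    def save(self, doc):
        self.storage.write(doc)

class FakeStorage:
    """A minimal test double that just records what was written."""
    def __init__(self):
        self.written = []
    def write(self, doc):
        self.written.append(doc)

fake = FakeStorage()
EasyToTest(fake).save("doc1")
assert fake.written == ["doc1"]
```

The second shape can be exercised in isolation; the first drags the rest of
the application along with it, which is the situation described above.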

> 2) Manual scripted tests.  This could be based on written test cases
> and test documents.  These tests require some expertise to
> design/write, but once the test cases are written they can be tested
> by a much larger set of volunteers.  Even power users could be helpful
> here.  A good tester follows the test case, but also has skills in
> describing a bug in the defect report, with all necessary detail, but
> little extraneous detail.  They know "how to think like a bug".

That might be comparable to what I called "complex test cases". As
writing unit tests is hard for many components, as mentioned above, this
is the kind of testing that gives us the most bang for the buck. The
build system has support for building and running them, and basically all
of them could be run in parallel, if set up accordingly.

> 4) Scripted/automated testing via the GUI.  Requires more effort and
> skill  to write and maintain, but once done, it requires less effort
> to execute.

That depends on what you mean by "effort". The tests that we have run
awfully slowly - even the most basic tests together add up to a run time
of approximately 8 hours. If you wanted to run all the tests that have
been written (and why wouldn't you want to do that?), you would have to
invest several days for just one platform.

There are several reasons for that. It could be improved by running as
many tests in parallel as possible, using as many cores of the test
machine as possible. If the test infrastructure weren't so byzantine and
inflexible, we probably could have done that already.
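As a sketch of that idea (assuming the tests are independent of each other;
the test names and the trivial runner below are invented, not the real OOo
test tool), one worker per core gets you the parallel run:

```python
# Sketch: run independent test jobs in parallel, one worker per available core.
# "run_test" is a stand-in; a real runner would spawn the test process here
# and return its exit status.
import concurrent.futures
import os

def run_test(name):
    # Placeholder for launching one test run; 0 means "passed".
    return (name, 0)

tests = [f"basic_test_{i}" for i in range(8)]  # hypothetical test names

with concurrent.futures.ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    results = dict(pool.map(run_test, tests))

failed = [name for name, status in results.items() if status != 0]
print(f"{len(results)} tests run, {len(failed)} failed")
```

With independent tests, wall-clock time then scales roughly with the number
of cores rather than with the number of tests.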
The test tool and what we can still do with it might become a larger
topic; I will open a new thread for it.

Regards,
Mathias
