corinthia-dev mailing list archives

From "jan iversen (JIRA)" <>
Subject [jira] [Updated] (COR-7) Select automated unit test tool to blackbox test libraries.
Date Sun, 18 Jan 2015 09:02:34 GMT


jan iversen updated COR-7:
    Component/s: Consumers - dftest
                 blackbox testing

> Select automated unit test tool to blackbox test libraries.
> -----------------------------------------------------------
>                 Key: COR-7
>                 URL:
>             Project: Corinthia
>          Issue Type: Wish
>          Components: blackbox testing, Consumers - dftest
>         Environment: source
>            Reporter: jan iversen
>            Priority: Minor
>             Fix For: 0.5
> I will try to reach out within the ASF, to see what the recommended tool is for doing black-box
library testing and documentation.
> peter: 
> This is already in place, with over 1200 test cases written so far.
> I need to document this, but basically when you run the following command:
> dfutil -test $DOCFORMATS_DIR/tests
> it recursively scans through the directory looking for files with the .test extension
and runs them. These have a specific structure, where the first line represents the function
to be tested, and the rest of the file is divided into parts specifying the input and output.
The dfutil program prints the name and result of each test, and the number of passes/failures
at the end.
> For example, tests/tables/word/update-bmerge01.test calls the Word_testUpdate function
(we could arguably rename these to be a bit more generic, e.g. "word-update", leaving out
"test"). The input to the test is everything from the line "#item input.docx" up until the
next "#item" line (note these can be nested; document.xml and styles.xml both form part of
the input here). Then there is "#item input.html" (in this case, the file to be updated),
and finally an "#item expected" (which the test harness compares the actual output against).
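> For illustration, a .test file following the description above might look roughly like
this (hypothetical content; the exact item syntax and how nesting is expressed may differ,
so see the real files under $DOCFORMATS_DIR/tests for the precise format):
> Word_testUpdate
> #item input.docx
> #item document.xml
> <w:document>...</w:document>
> #item styles.xml
> <w:styles>...</w:styles>
> #item input.html
> <html>...</html>
> #item expected
> <w:document>...</w:document>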
> This test harness has an include facility for tests that share common data, e.g.
stylesheets in most cases.
> You can also run the following command:
> dfutil -testdiff $DOCFORMATS_DIR/tests/tables/word/update-bmerge01.test
> and, if this is a failing test, it will display a diff between the actual and expected output.
> Running the test harness through valgrind is also an effective way to check for memory
corruption or memory leaks.
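> For example, with a standard valgrind invocation (nothing here is specific to dfutil):
> valgrind --leak-check=full dfutil -test $DOCFORMATS_DIR/tests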
> See TestFunctions.c for a list of all the functions that can be tested. This I think needs
some cleanup, and we need to make it easier to add new functions.
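> For illustration, one common way to make adding new functions easy is a name-to-function
table that the harness consults when it reads the first line of a .test file. The following
is a hypothetical, self-contained sketch of that idea, not the actual TestFunctions.c (the
real names and signatures in the repository will differ):
> #include <stdio.h>
> #include <string.h>
>
> typedef void (*TestFunction)(void);
>
> /* Stand-in for a real test function such as Word_testUpdate. */
> static void dummyWordUpdate(void) { printf("running Word_testUpdate\n"); }
>
> /* Registry: adding a new test function means adding one row here. */
> static const struct { const char *name; TestFunction fun; } testFunctions[] = {
>     { "Word_testUpdate", dummyWordUpdate },
> };
>
> /* Look up the function named on the first line of a .test file. */
> static TestFunction lookupTest(const char *name)
> {
>     for (size_t i = 0; i < sizeof(testFunctions)/sizeof(testFunctions[0]); i++)
>         if (strcmp(testFunctions[i].name, name) == 0)
>             return testFunctions[i].fun;
>     return NULL;
> }
>
> int main(void)
> {
>     TestFunction fun = lookupTest("Word_testUpdate");
>     if (fun != NULL)
>         fun();
>     return fun != NULL ? 0 : 1;
> }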
> jan: 
> As far as I can see, you only have an executable that can execute a test case.
> An automated test program contains a lot more.
>     Description of each test case
>     Dependency tree (if test foo fails, don't run....)
>     Generation of test release notes (basically the description)
>     Batch generation of test suites.
> peter: By batch generation, are you referring to creating test files automatically, or to
executing them in batch? Also, I've just put up a wiki page describing the test harness as
it currently exists.
> jan: 
> super...I mean create.
> With a test tool, you can e.g. tell it to generate a test script for
>     all test cases added after the last release (corresponding to all bugs solved)
>     all test cases relating to a specific category (grouping)
> This gives a highly flexible test setup, which is nice when you have 10,000+ test cases, which
we should have.
> peter: For the grouping, do you think that could be achieved with a suitable directory
organisation, or do we need something more expressive (e.g. to specify sets that can't simply
be expressed as parts of a tree)?
> jan: I would use a simple directory structure for the function tests, with one directory
per function group (e.g. document format).
> This is purely for organizing the test cases physically, so everybody knows where they
belong. When running test cases, I would also create virtual groups (tools do that), e.g.
all test cases changed since the last release.
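> As a sketch of how such a virtual group could be built with the existing harness: since
dfutil -test recursively scans a directory, one could copy the changed .test files into a
scratch directory and point dfutil at it. The tag name v0.4 is hypothetical here, and
$DOCFORMATS_DIR is assumed to be the repository root:
> SUITE=$(mktemp -d)
> cd "$DOCFORMATS_DIR"
> git diff --name-only v0.4..HEAD -- '*.test' | while read -r f; do
>     mkdir -p "$SUITE/$(dirname "$f")"
>     cp "$f" "$SUITE/$f"
> done
> dfutil -test "$SUITE"
> Note that any files pulled in via the include facility mentioned above would need to be
copied across as well.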
> peter: 
> What do you think of the following?
> general/bdt/move
> general/bdt/remove
> general/changes
> html/normalization
> css/properties
> css/numbering
> latex/basic
> latex/captions
> latex/formatting
> latex/formattingrun
> latex/headings
> latex/tables
> ooxml/word/basic
> ooxml/word/bookmarks
> ooxml/word/captions
> ooxml/word/changetracking
> ooxml/word/extra
> ooxml/word/fonts
> ooxml/word/formatting
> ooxml/word/formatting/docdefaults
> ooxml/word/formatting/highlight
> ooxml/word/formatting/paragraph
> ooxml/word/formatting/run
> ooxml/word/headings
> ooxml/word/images
> ooxml/word/indentation
> ooxml/word/links
> ooxml/word/lists
> ooxml/word/mergeruns
> ooxml/word/notes
> ooxml/word/numbering
> ooxml/word/page
> ooxml/word/references
> ooxml/word/styles
> ooxml/word/tables
> ooxml/word/toc
> ooxml/word/unsupported
> ooxml/word/whitespace
> Note the above is just a reorganisation of what's there now, plus adding an ooxml level to
cater for future support for other aspects of the spec.
> If you're ok with this, I can go ahead and restructure the directories (this will be mostly
just moving files, but I think there may be a few references to included files to fix up).
> jan: looks OK to me.
> dennis: 
> I think this is tangential, although I didn't want to create duplicate issues on tests.
If we are talking about generating and verifying test documents, I just saw an announcement
> which points to an ODFAutoTests tool on GitHub.
> The tool is written in Java and provided under a BSD-like license. It is limited to checking
that a particular generated test document round-trips through a number of selected applications.

This message was sent by Atlassian JIRA
