cordova-dev mailing list archives

From "Smith, Peter" <>
Subject RE: Understanding the intent of the test suites and how they fit together
Date Fri, 05 Jul 2013 04:33:22 GMT
Firstly, thank you for the useful information.

There were a couple of questions asked which I address here:


>> * There seems no explicit tests for any plugins. Omitted deliberately
>> assuming cordova-mobile-spec will work through all this code, or to do?

Brian wrote: What plugins do you mean?

I just meant that none of the JUnit-style test cases have any direct
reference to the Java plugin implementations.

Anyway, I understand now that this seems deliberate:
* Andrew wrote: "Omitted deliberately I think."
* Joe wrote: "We test the API through mobile-spec, so we don't consider
these tests as high a priority"
* Brian wrote: "We have tried to keep as much testing infra in the JS
[...]"


Joe wrote: "Honestly, I'm curious what your intent is with this line of
questioning. Since this is an open source project, if you think that
this testing is lacking, you can always submit more tests"

Yes, I would like to contribute test cases - whether we get to do so
depends on our own project circumstances. But it is not so simple to
know if/where testing is lacking without a good understanding of how
the existing suites work together, which ones are/aren't being run,
their expected results, etc. Hence my questions.


-----Original Message-----
From: Smith, Peter [] 
Sent: Thursday, 4 July 2013 5:27 PM
Subject: Understanding the intent of the test suites and how they fit
together

Hi devs,


I would like to better understand the actual intent of the various
Cordova test suites for v2.x.


(This question was already asked on the PhoneGap forum last week but got
no response.)


From what I can work out, the testing of Cordova (for Android) is shared
between a number of projects:

* cordova-js

* cordova-android

* cordova-mobile-spec


Each project has test suites, and my understanding from reading/running
those various suites is as follows:


cordova-js tests

* ~400 Jasmine test cases

* Intent appears to be to exercise various parts of the cordova.js core

* The focus looks to be on framework-level concerns: channels, modules,
message processing, etc.

* Specifically, the plugins are not tested much in this suite. I am
guessing it is assumed that plugins are covered by cordova-mobile-spec.
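For concreteness, the framework-level behaviour those specs exercise
includes the sticky publish/subscribe "channel" pattern used for events
like deviceready. The sketch below is a simplified stand-in I wrote to
illustrate that pattern - it is NOT the real cordova.js channel
implementation:

```javascript
// Simplified sketch of a sticky pub/sub channel, the kind of framework
// plumbing the cordova-js Jasmine suite tests. Illustrative only.
function Channel() {
  this.handlers = [];
  this.fired = false;
  this.args = null;
}

Channel.prototype.subscribe = function (fn) {
  // Sticky semantics: a subscriber added after the channel has already
  // fired is invoked immediately with the original arguments.
  if (this.fired) {
    fn.apply(null, this.args);
  } else {
    this.handlers.push(fn);
  }
};

Channel.prototype.fire = function () {
  this.fired = true;
  this.args = Array.prototype.slice.call(arguments);
  this.handlers.forEach(function (fn) {
    fn.apply(null, this.args);
  }, this);
  this.handlers = [];
};

// Usage: a deviceready-style event. The early subscriber runs when the
// channel fires; the late subscriber still runs, immediately.
var deviceready = new Channel();
deviceready.subscribe(function (name) {
  console.log('early handler saw: ' + name);
});
deviceready.fire('device');
deviceready.subscribe(function (name) {
  console.log('late handler saw: ' + name);
});
```

A typical spec in that suite would assert exactly this kind of
behaviour (late subscribers still fire, handlers fire once, and so on)
rather than any plugin functionality.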


cordova-mobile-spec tests

* This project is nothing but a test suite

* Lots of manual/auto tests, with the fairly clear intent of covering
100% of the Cordova JavaScript device APIs

* 257 auto Jasmine tests; 20-ish manual test buttons

* By our analysis the API coverage is currently something closer to 70%.

* It seems strange not to expect 100% success - it risks real bugs
falling through the cracks if everybody assumes failures are normal.
Apparently this is a known issue on the dev list.

* Also includes benchmark test cases (I haven't looked at these - they
are not part of the auto-tests - is there any documentation on them?)

* Some manual tests also: presumably separated from the auto-tests when
a user is required to visually confirm something happened in order to
judge the result.
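The auto/manual split comes down to whether a machine can check the
outcome: an auto test asserts on a callback result, while a manual test
can only trigger a side effect and ask the tester to confirm it. A
minimal sketch of that distinction - the `fakeDevice` API here is my
own illustrative stand-in, not the real Cordova API:

```javascript
// Sketch of auto-style vs manual-style tests. fakeDevice is a fake
// stand-in for a device API; names and signatures are assumptions.
var fakeDevice = {
  getInfo: function (success) {
    // Real device APIs call back asynchronously with metadata.
    setTimeout(function () {
      success({ platform: 'Android', version: '2.x' });
    }, 0);
  },
  vibrate: function (ms) {
    // A physical effect: no machine-checkable result.
    console.log('buzzing for ' + ms + 'ms (human must confirm)');
  }
};

// Auto-style test: the outcome is machine-checkable, so it can run
// unattended and report pass/fail.
function autoTest(done) {
  fakeDevice.getInfo(function (info) {
    console.assert(info.platform === 'Android', 'platform should be set');
    done();
  });
}

// Manual-style test: no assertion is possible; a tester wired to a
// button judges the result by observing the device.
function manualTest() {
  fakeDevice.vibrate(500);
}

autoTest(function () { console.log('auto test passed'); });
manualTest();
```

This is presumably why the manual tests are a row of buttons in
mobile-spec rather than Jasmine specs.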


cordova-android tests

* Tests are split into manual and JUnit-style tests

* There are about 20 of each. (This seems a very small number -
coverage of the Java classes is only about 10%.)

* Most JUnit test cases seem focussed on the behaviour of
CordovaWebView. It looks like a lot of the Java code is not touched by
test cases at all.

* There seem to be no explicit tests for any plugins. Omitted
deliberately, assuming cordova-mobile-spec will work through all this
code, or still to do?

* There seems to be no explicit test for the Java side of the message
bridge. To do?

* Manual tests suffer lots of failures in my environment - so much so
that it's not even clear whether these manual tests are expected to
work at all, or whether they are just a bunch of ad-hoc test fragments
that were useful once but have since fallen on hard times.




Please help us to understand the intent of each suite. From what I wrote
above, what is correct/incorrect? What's missing?


Basically, we see a variety of test cases (some manual and some
automatic) across a number of projects, and are just trying to
understand how the whole jigsaw fits together, and therefore how to
judge which bits might be missing.


Then there are more questions regarding testing prior to a release:
Which of these test suites actually get run before a release? Just the
auto ones? What are the required pass/error results for the release to
go ahead?


