cordova-dev mailing list archives

From Jesse <purplecabb...@gmail.com>
Subject Re: Mobile spec tests and exclusion list
Date Tue, 29 Oct 2013 18:37:48 GMT
I commented on the pull request directly.
Also, be sure to apply the same changes to the tests in the contacts plugin
repo. (For now we still have to do this every time.)

@purplecabbage
risingj.com


On Tue, Oct 29, 2013 at 11:24 AM, Sergey Grebnov (Akvelon) <
v-segreb@microsoft.com> wrote:

> Michal, Jesse, David, Brian, thank you for your input. Since big changes
> are coming in how we plug in and execute tests, I've temporarily disabled
> a few tests which require user interaction (wp8 only) so that they can run
> with Medic. Could someone review and merge this change?
> https://github.com/apache/cordova-mobile-spec/pull/40
>
> Thx!
> Sergey
> -----Original Message-----
> From: mmocny@google.com [mailto:mmocny@google.com] On Behalf Of Michal
> Mocny
> Sent: Tuesday, October 29, 2013 8:55 AM
> To: dev
> Subject: Re: Mobile spec tests and exclusion list
>
> That's a cool heuristic for human readability (actually I think I'll adopt
> it to some extent), but for something like CI (buildbot/medic), which
> automatically emails if anything breaks, that solution doesn't quite
> suffice.
>
> However, it's an interesting point that we could make use of some
> domain-specific syntax in our test descriptions as a way to implement the
> "expect fails on blah" idea, if the testing framework doesn't have a better
> solution and we can't implement it ourselves.  Tomorrow I'll see if Jasmine
> just has an easy solution to this before we start grasping at straws.
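[Editor's note: the "expect fails on blah" idea Michal mentions could, for instance, be driven by a tag in the test description itself. A minimal sketch; the `[expect-fail:...]` tag and the `shouldInvert`/`runTest` helpers are hypothetical names, not mobile-spec or Jasmine APIs:]

```javascript
// Hypothetical sketch: encode "expect fails on <platform>" as a tag in
// the test description, and have the runner invert the result on that
// platform. A failure then counts as a pass, and an unexpected pass is
// reported as a failure so stale tags get noticed.
var EXPECT_FAIL = /\[expect-fail:([a-z0-9]+)\]/;

function shouldInvert(description, platform) {
  var m = EXPECT_FAIL.exec(description);
  return !!m && m[1] === platform;
}

function runTest(description, testFn, platform) {
  var passed;
  try { testFn(); passed = true; } catch (e) { passed = false; }
  return shouldInvert(description, platform) ? !passed : passed;
}
```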
>
> -Michal
>
>
> On Mon, Oct 28, 2013 at 6:43 PM, Smith, Peter <peters@fast.au.fujitsu.com
> >wrote:
>
> > FWIW, here is another idea for consideration.
> >
> > For any additional "mobile-spec" tests which we developed in-house, we
> > have adopted a convention of prefixing a JIRA reference to any test
> > which is known to fail (for whatever reason). This is based on the
> > assertion that any failing test ought to have an accompanying JIRA.
> >
> > For example, see the [CB-NNNN] in this test case:
> >
> > {code}
> > describe("ContactAddress", function() {
> >     it("fj.contacts.cf.1 - [CB-4849] non-string type for postalCode",
> >         function() {
> >         ...
> >     });
> > });
> > {code}
> >
> > So:
> >
> > * If a test case fails (is red) but has no JIRA reference, then it
> > represents a new kind of failure and needs investigation ASAP, and
> > possibly a new JIRA or a re-write of the test.
> >
> > * If a test case fails (is red) but already has a JIRA reference, then
> > most likely the reason for failure is described by that JIRA. There is
> > no guarantee the failure is caused by the same reason, but it is
> > generally of less immediate concern. In any case it is easy enough to
> > look up the JIRA number to check.
> >
> > * If a test succeeds (is green) in all environments but has a JIRA
> > reference then you need to check if the JIRA was fixed and probably
> > remove the reference from the test item description.
> >
> > -
> >
> > Yeah, it's a bit clunky, and certainly not foolproof, but just being
> > able to see the JIRA references in the test result summary removes most
> > of the guesswork about whether a particular test 46 was known to fail or
> > not...
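[Editor's note: as a sketch of how Peter's convention could feed a CI summary, a results post-processor might split failures into tagged and untagged. `triageFailures` is a hypothetical helper, not part of mobile-spec or Medic:]

```javascript
// Sketch: triage failed test descriptions by the [CB-NNNN] prefix
// convention described above. Failures carrying a JIRA reference are
// "known"; the rest are new and need investigation.
var JIRA_REF = /\[CB-\d+\]/;

function triageFailures(failedDescriptions) {
  var known = [], fresh = [];
  failedDescriptions.forEach(function (desc) {
    (JIRA_REF.test(desc) ? known : fresh).push(desc);
  });
  return { known: known, fresh: fresh };
}
```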
> >
> > Peter
> >
> > -----Original Message-----
> > From: David Kemp [mailto:drkemp@google.com]
> > Sent: Tuesday, 29 October 2013 2:20 AM
> > To: dev@cordova.apache.org
> > Subject: Re: Mobile spec tests and exclusion list
> >
> > Specifically, I am thinking of a test that passes on one
> > platform/device but will not pass on another one, so maybe 'take them
> > out' was poor language.
> >
> > If at all possible, the test should be coded in some way to skip it or
> > change the expectation for that platform/device, rather than 'just
> > knowing'
> > that test 46 always fails on platform xx.
> >
> > This could be done in the test itself (inspect cordova-device or
> > something) or by some external file that describes expected results.
> > I would generally rather see it done explicitly in the test.
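[Editor's note: David's "explicitly in the test" option might look roughly like this. `device.platform` is the Cordova device plugin's property; `skipOn` and the platform list are illustrative:]

```javascript
// Sketch: skip a spec explicitly on platforms where it is known to fail,
// instead of relying on an external "test 46 always fails on xx" list.
function skipOn(platforms, currentPlatform, testFn) {
  if (platforms.indexOf(currentPlatform) !== -1) {
    return "skipped";           // recorded as skipped, not failed
  }
  testFn();
  return "ran";
}

// Inside a spec this might read (illustrative):
// it("contacts.spec.46 save", function () {
//   skipOn(["wp8"], device.platform, function () { /* expect(...) */ });
// });
```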
> >
> >
> >
> >
> > On Mon, Oct 28, 2013 at 10:10 AM, Michal Mocny <mmocny@chromium.org>
> > wrote:
> >
> > > Some test frameworks just have an "expectFailure", so a failed test
> > > actually lets the test suite pass, and a passed test makes it fail.
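[Editor's note: an `expectFailure` wrapper of the kind Michal describes can be sketched in a few lines; this is illustrative, not an actual Jasmine 1.x API:]

```javascript
// Sketch: invert a known-failing test. The suite stays green while the
// bug exists, and flags the test the moment it starts passing.
function expectFailure(testFn) {
  var failed = false;
  try { testFn(); } catch (e) { failed = true; }
  if (!failed) {
    throw new Error("test unexpectedly passed; remove expectFailure");
  }
}
```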
> > >
> > > -Michal
> > >
> > >
> > > On Mon, Oct 28, 2013 at 8:17 AM, David Kemp <drkemp@google.com> wrote:
> > >
> > > > -1 for known failing tests. You need to have them all pass for a
> > > > clean run.
> > > > If the tests don't work, take them out.
> > > >
> > > > I would support some additional functionality in the test runner
> > > > to allow marking tests.
> > > > We definitely have tests that are known to not work on a platform,
> > > > OS version or device.
> > > > Being able to embody that info in the test system would be great.
> > > >
> > > > Until we get more stuff cleaned up we also have tests that are
> > > > flaky and probably should just trigger a rerun if they fail.
> > > > My preference is to just fix those, though.
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > On Sat, Oct 26, 2013 at 11:02 PM, purplecabbage
> > > > <purplecabbage@gmail.com
> > > > >wrote:
> > > >
> > > > > Having a known failure in the tests on wp7 is no biggie, it has
> > > > > always been there. Just move on ...
> > > > >
> > > > > Sent from my iPhone
> > > > >
> > > > > > On Oct 26, 2013, at 3:24 PM, Michal Mocny
> > > > > > <mmocny@chromium.org>
> > > wrote:
> > > > > >
> > > > > > We have a proposal and prototype on the table right now for
> > > > > > re-working tests to ship with plugins, defined according to
> > > > > > auto and manual tests.
> > > > > >
> > > > > > To accomplish what you ask for would require a specialized
> > > > > > testing app that simply runs both at the same time. (This
> > > > > > wouldn't be the default, but would be easy to make.)
> > > > > >
> > > > > > Thus, I think the tests shouldn't be modified (it's hard to
> > > > > > state at test definition time in which fashion they should be
> > > > > > used); the test runner should.  This won't solve the problem
> > > > > > today, but perhaps in about a month it could.
> > > > > >
> > > > > > -Michal
> > > > > >
> > > > > >
> > > > > > On Sat, Oct 26, 2013 at 6:41 AM, Sergey Grebnov (Akvelon) <
> > > > > > v-segreb@microsoft.com> wrote:
> > > > > >
> > > > > >> Hi Michal,
> > > > > >>
> > > > > >> Agree. But taking into account that having a way to run all
> > > > > >> the tests (including ones w/ user interaction) is very useful
> > > > > >> for Windows Phone, I propose the following:
> > > > > >> 1. No changes for non-WP platforms
> > > > > >> 2. For WP:
> > > > > >>  a) Use the following condition for the tests which require
> > > > > >> user interaction:
> > > > > >>    define(..., function(...) {
> > > > > >>      if (isWP8 && !runAll) return;
> > > > > >>      expect(...);
> > > > > >>      ...
> > > > > >>    })
> > > > > >>  b) Current autotests will run w/o the runAll option, so they
> > > > > >> won't require user interaction
> > > > > >>  c) Add a 'Run All Tests (Extended)' option specifically for
> > > > > >> WP8 where we will have runAll == true
> > > > > >>
> > > > > >> Motivation:
> > > > > >> 1. I don't think we should move such tests to manual tests for
> > > > > >> WP only to be consistent with other platforms - we actually
> > > > > >> test the api call and check the result
> > > > > >> 2. By default all tests will run w/o any user interaction
> > > > > >> 3. We will have an option to quickly check all apis before
> > > > > >> release via Run All Tests (Extended). Otherwise we would need
> > > > > >> special information on how to check all the apis, and not
> > > > > >> forget to run such special tests.
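[Editor's note: Sergey's runAll gating, written out as a runnable sketch. The `runAll`/`isWP8` flags and the `defineTest` harness are illustrative stand-ins for the mobile-spec plumbing:]

```javascript
// Sketch of proposal 2a: tests needing user interaction run only when
// the 'Run All Tests (Extended)' option sets runAll to true on WP8.
function defineTest(opts, body) {
  if (opts.isWP8 && !opts.runAll) {
    return "skipped";   // default autotest run: no user interaction
  }
  body();
  return "ran";
}
```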
> > > > > >>
> > > > > >> Thx!
> > > > > >> Sergey
> > > > > >> -----Original Message-----
> > > > > >> From: mmocny@google.com [mailto:mmocny@google.com] On Behalf
> > > > > >> Of Michal Mocny
> > > > > >> Sent: Saturday, October 26, 2013 4:12 AM
> > > > > >> To: dev
> > > > > >> Subject: Re: Mobile spec tests and exclusion list
> > > > > >>
> > > > > >> Auto tests should run automatically without intervention.  If
> > > > > >> user action is needed for a test to pass, we should call that
> > > > > >> something different (manual tests have been used).
> > > > > >>
> > > > > >> I think some variant of #3 is fine; this isn't a common
> > > > > >> problem.  I wouldn't even test for Medic specifically, since I
> > > > > >> want my auto tests to run automatically even when testing by
> > > > > >> hand.
> > > > > >>
> > > > > >> define(..., function(...) {
> > > > > >>  if (isWP8) return;
> > > > > >>  expect(...);
> > > > > >>  ...
> > > > > >> })
> > > > > >>
> > > > > >> -Michal
> > > > > >>
> > > > > >>
> > > > > >> On Fri, Oct 25, 2013 at 4:37 PM, Sergey Grebnov (Akvelon) <
> > > > > >> v-segreb@microsoft.com> wrote:
> > > > > >>
> > > > > >>> Mobile spec autotests include tests which on some platforms
> > > > > >>> require user interaction to complete. For example, the
> > > > > >>> contact save api on Windows Phone requires the user to
> > > > > >>> manually click on the save button. This prevents the tests
> > > > > >>> from being run as part of Medic test automation, since the
> > > > > >>> test app just hangs on such api calls.
> > > > > >>>
> > > > > >>> Is Windows Phone special, or are there similar problems on
> > > > > >>> other platforms?
> > > > > >>>
> > > > > >>> I'm thinking about the following possible approaches:
> > > > > >>> #1 An ad-hoc solution in Medic - replacing some test files as
> > > > > >>> part of Medic functionality (some additional wp-specific
> > > > > >>> build step).
> > > > > >>> #2 Extending mobile spec functionality - adding something
> > > > > >>> like a test exclusion config where you can define test ids
> > > > > >>> (or even the whole api) to be skipped. Such an exclusion list
> > > > > >>> could be generated on the fly and put into the app before
> > > > > >>> starting the tests.
> > > > > >>> #3 If there are only a few such tests, we can probably add a
> > > > > >>> check for the current platform to determine whether to
> > > > > >>> include the test. For example:
> > > > > >>> if(!(window.MedicTestRunner && isWP8)) {testDefinition}
> > > > > >>> Or do the same check inside the test to fail gracefully.
> > > > > >>>
> > > > > >>> Thoughts?
> > > > > >>>
> > > > > >>> Thx!
> > > > > >>> Sergey
> > > > > >>
> > > > >
> > > >
> > >
> >
> >
>
