harmony-dev mailing list archives

From: Mikhail Loenko <mloe...@gmail.com>
Subject: Re: [testing] code for exotic configurations
Date: Thu, 26 Jan 2006 13:13:02 GMT
We have tests that verify that our security framework works with the
user's providers installed in the system.

Consider the EncryptedPrivateKeyInfo class. To call one of its
constructors you need an AlgorithmParameters instance from a provider.
If the providers installed in your system do not supply an
implementation of AlgorithmParameters (or if you misconfigured your
build), you'll be unable to test that constructor of
EncryptedPrivateKeyInfo.
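
For illustration, a minimal sketch of that dependency (the "AES"
algorithm name, the class name, and the dummy bytes are illustrative
choices, not our actual test code; whether getInstance() succeeds
depends entirely on the installed providers):

    import java.security.AlgorithmParameters;
    import javax.crypto.EncryptedPrivateKeyInfo;
    import javax.crypto.spec.IvParameterSpec;

    public class EpkiDependencyDemo {
        public static void main(String[] args) throws Exception {
            // getInstance() searches the installed providers and throws
            // NoSuchAlgorithmException when none of them implements
            // AlgorithmParameters for "AES" -- the exotic case.
            AlgorithmParameters params = AlgorithmParameters.getInstance("AES");
            params.init(new IvParameterSpec(new byte[16]));

            // Without the provider-supplied params instance, the
            // constructor under test cannot even be invoked.
            byte[] dummyEncrypted = new byte[] { 1, 2, 3 };
            EncryptedPrivateKeyInfo epki =
                    new EncryptedPrivateKeyInfo(params, dummyEncrypted);
            System.out.println("constructed for: " + epki.getAlgName());
        }
    }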

So the exotic configuration is: the user's providers have no
implementation of AlgorithmParameters.

A test that fails when no AlgorithmParameters implementation exists
can scare a user; a test that silently skips risks skipping forever
because of a misconfiguration.
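
In JUnit terms the dilemma looks roughly like this (a hypothetical
test method; both variants in the catch block are bad for the reasons
above):

    import java.security.AlgorithmParameters;
    import java.security.NoSuchAlgorithmException;
    import junit.framework.TestCase;

    public class EncryptedPrivateKeyInfoTest extends TestCase {
        public void testCtorNeedingAlgorithmParameters() throws Exception {
            AlgorithmParameters params;
            try {
                params = AlgorithmParameters.getInstance("AES");
            } catch (NoSuchAlgorithmException e) {
                // Variant 1: fail("no AlgorithmParameters provider");
                // -- scares users on an ordinary configuration.
                // Variant 2: return silently -- the test may then be
                // skipped forever if the build is merely misconfigured.
                return;
            }
            // ... exercise the EncryptedPrivateKeyInfo constructor ...
        }
    }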

That is an example.

Thanks,
Mikhail

On 1/26/06, George Harley1 <GHARLEY@uk.ibm.com> wrote:
> Hi Mikhail,
>
> Just to help clarify things, could you give us all some examples of what
> you define as an "exotic configuration"?
>
> Thanks in advance,
> George
> ________________________________________
> George C. Harley
>
> Mikhail Loenko <mloenko@gmail.com>
> 26/01/2006 12:11
> Please respond to: harmony-dev@incubator.apache.org
>
> To: harmony-dev@incubator.apache.org, geir@pobox.com
> Subject: Re: [testing] code for exotic configurations
>
> Do you mean that for a single test that verifies 10 lines of code
> working on a very specific configuration I have to create a parallel
> test tree?
>
> What about tests that work in two different exotic configurations? Should
> we duplicate them?
>
> Thanks,
> Mikhail
>
> On 1/26/06, Geir Magnusson Jr <geir@pobox.com> wrote:
> > one solution is to simply group the "exotic" tests separately from the
> > main tests, so they can be run optionally when you are in that exotic
> > configuration.
> >
> > You can do this in several ways, including a naming convention, or
> > another parallel code tree of the tests...
> >
> > I like the latter, as it makes it easier to "see"
> >
> > geir
> >
> >
> > Mikhail Loenko wrote:
> > > Well, let's start a new thread, as this is a more general problem.
> > >
> > > So suppose we have some code designed for exotic configurations,
> > > and we have tests that verify that exotic code.
> > >
> > > When run in a usual (non-exotic) configuration, the test should
> > > report something that would not scare people. But if one wants to
> > > test that specific exotic configuration, he should be able to
> > > easily verify that he successfully set up the required
> > > configuration and that the test worked well.
> > >
> > > The options I see here:
> > > 1) introduce a new test status (like "skipped") to mark tests
> > > that did not actually run
> > > 2) agree on exact wording that skipped tests would print, so the
> > > logs can be grepped later
> > > 3) introduce indicator tests that fail when the current
> > > configuration disallows running certain tests
> > >
> > > Please let me know what you think
> > >
> > > Thanks,
> > > Mikhail Loenko
> > > Intel Middleware Products Division
> > >
> > >
> >
>
