harmony-dev mailing list archives

From Mikhail Loenko <mloe...@gmail.com>
Subject Re: [testing] code for exotic configurations
Date Tue, 31 Jan 2006 11:43:33 GMT

We are deep into a discussion of what the test types are, which is useful
and interesting, but we have drifted a bit from the original problem
statement. I'd like to restate the original problem in different words,
because there seems to be some misunderstanding here.

So, we have some code, for example the security classlib. It works in some
environment that includes other classlib modules, pluggable providers, OS, HW,
etc. And there are tests (let's call them just tests; no unit, no
system, etc.) that verify that my security classlib works well in a reasonable
environment (the other classlib modules follow the spec, the installed
providers are from some 'preferred' list, the OS is ..., etc.).

The tests pass, everybody is happy so far.

One day I develop a new exotic provider that contains a SecureRandom
implementation based on the Moon phase, Solar activity and a hash of a
snapshot of the whole internet. And I want to see how my security classlib
works with that new provider.

I run the tests and see that some of them fail. At this point I'd like
to be able to distinguish between:
- tests that fail because of the absence of, e.g., an AlgorithmParameters
implementation in the new provider (like the EncryptedPrivateKeyInfo test
from my previous posting in this thread) and
- tests that fail because my code is not ready for such good random numbers
returned by the new provider.
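To make the distinction concrete, here is a minimal sketch (the class and
method names are hypothetical, not from any Harmony test suite) of how a test
could probe the installed providers for the prerequisite it relies on, so that
a missing AlgorithmParameters implementation is reported as an unmet
assumption rather than as a product failure:

```java
import java.security.Provider;
import java.security.Security;

public class PrereqCheck {
    // Returns true if some installed provider supplies an
    // AlgorithmParameters implementation for the given algorithm.
    // Security.getProviders(filter) returns null when no provider matches.
    static boolean hasAlgorithmParameters(String algorithm) {
        Provider[] ps = Security.getProviders("AlgorithmParameters." + algorithm);
        return ps != null && ps.length > 0;
    }

    public static void main(String[] args) {
        if (!hasAlgorithmParameters("DES")) {
            // Prerequisite unmet: not a bug in the code under test.
            System.out.println("SKIP: no AlgorithmParameters.DES in this environment");
            return;
        }
        System.out.println("prerequisite met, running real assertions");
        // ... the actual EncryptedPrivateKeyInfo assertions would go here ...
    }
}
```

With a check like this, the same test body can run against both the normal
and the exotic configuration; only the interpretation of the unmet
prerequisite differs.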

I would not like to have two tests for EncryptedPrivateKeyInfo: one for the
normal environment that fails when no AlgorithmParameters implementation is
available, and another one for exotic configurations that allows the
implementation to be absent.

Another possible solution is to have a list of all possible assumptions
regarding the configuration and put some tests into an exclusion list before
running the tests. In this case I have to verify all possible assumptions
to run any test.
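A rough sketch of that mechanism (the test names and assumption strings are
made up for illustration): each test declares the assumptions it relies on,
and before the run we compute the exclusion list from whichever assumptions
the current environment satisfies. The downside mentioned above is visible in
the code: every assumption must be checked up front.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class ExclusionList {
    // Hypothetical mapping: each test -> the environment assumptions it needs.
    static final Map<String, List<String>> ASSUMPTIONS = Map.of(
        "EncryptedPrivateKeyInfoTest", List.of("AlgorithmParameters.DES"),
        "SecureRandomStatTest", List.of("SecureRandom.SHA1PRNG"));

    // Given the set of assumptions that hold in this environment,
    // return the tests that must be excluded before the run.
    static Set<String> excluded(Set<String> satisfied) {
        Set<String> out = new TreeSet<>();
        for (Map.Entry<String, List<String>> e : ASSUMPTIONS.entrySet()) {
            if (!satisfied.containsAll(e.getValue())) {
                out.add(e.getKey());
            }
        }
        return out;
    }
}
```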

The third solution would be to enable a test to report a warning that some
assumption about the configuration or environment is not met. In a normal
environment warnings should be treated like errors; in exotic ones, in a
different way.

One of the simplest solutions would be to modify the tests so that, depending
on some system variable (like ignore.warnings), they will fail or pass.
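For instance, the warning-vs-error behavior could hang off that one property.
This is only a sketch under the assumptions above (the helper name is mine;
only the ignore.warnings property comes from this thread): in a normal run the
unmet assumption fails the test, while with -Dignore.warnings=true it is
merely logged and the test goes on.

```java
public class ConfigWarning {
    // Report an unmet environment assumption. In a normal environment
    // (ignore.warnings unset or false) it fails hard; in an exotic one
    // (-Dignore.warnings=true) it only logs and lets the test continue.
    static void environmentWarning(String message) {
        // Boolean.getBoolean reads the named system property.
        if (Boolean.getBoolean("ignore.warnings")) {
            System.err.println("WARNING: " + message);
        } else {
            throw new AssertionError("Unmet environment assumption: " + message);
        }
    }

    public static void main(String[] args) {
        System.setProperty("ignore.warnings", "true");
        environmentWarning("provider lacks AlgorithmParameters"); // only logs
        System.out.println("test continues under exotic configuration");
    }
}
```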

The original approach used in the security2 tests (log a warning) was not
liked by the community, so we have to find a good replacement.

