harmony-dev mailing list archives

From Mikhail Loenko <mloe...@gmail.com>
Subject Re: [testing] code for exotic configurations
Date Fri, 27 Jan 2006 14:33:40 GMT
OK, let's see your contribution.

Meanwhile I've gone through the security2 unit tests; some of them
*cannot* do verification in certain 'exotic' configurations, so I've made
them report failure in that case.

And there are a few tests that *can* verify only in some 'exotic'
configuration. But that configuration is possible only when legacy
providers are installed in the system. Normally these tests do not verify
anything, so I've made them report pass when the configuration is not
'exotic'.

So I'm going to return to both types of tests when we have a framework
good enough to handle the things we discussed.
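[Editor's sketch] The guarding pattern described above could look roughly like the following. This is a minimal illustration, not security2 code; the provider name, class, and helper methods are all assumptions:

```java
import java.security.Provider;
import java.security.Security;

// Hypothetical sketch: a test checks at runtime whether the configuration
// is 'exotic' (here: whether some legacy provider is installed) and only
// then performs its real verification; otherwise it reports pass without
// verifying anything. All names are illustrative.
public class ConfigGuard {

    // True if a provider with the given name is installed in this VM.
    static boolean hasProvider(String name) {
        return Security.getProvider(name) != null;
    }

    // Shape of a test that can only verify under the 'exotic' setup:
    static boolean runExoticOnlyTest() {
        if (!hasProvider("SomeLegacyProvider")) {
            return true;        // not 'exotic': report pass, verify nothing
        }
        return doRealVerification();
    }

    static boolean doRealVerification() {
        return true;            // placeholder for the actual checks
    }

    public static void main(String[] args) {
        // List what is actually installed, then run the guarded test.
        for (Provider p : Security.getProviders()) {
            System.out.println(p.getName());
        }
        System.out.println(runExoticOnlyTest());
    }
}
```

A real test would replace doRealVerification() with the actual checks, and a test that can *not* verify under an 'exotic' setup would invert the guard and report failure instead.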

Thanks,
Mikhail.

On 1/27/06, George Harley <george.c.harley@googlemail.com> wrote:
> Hi Mikhail,
>
> Comments inlined below ...
>
> Best regards,
> George
> IBM UK
>
>
> Mikhail Loenko wrote:
> > Hi George,
> >
> > On 1/27/06, George Harley <george.c.harley@googlemail.com> wrote:
> >
> >> Sure. And it should be the role of the test framework to inform users
> >> about which tests got run, passed, failed, got skipped etc.
> >>
> >
> > see below
> >
> >
> >>> It would not disturb most people because the test will pass in a 'bad'
> >>> environment. But those who know about these tests will sometimes grep
> >>> logs to validate the configuration.
> >>>
> >>>
> >> Why use logs to tell us information that the unit test framework can already
> >> provide ? Who wants to have to grep through log files when the test runner
> >> can provide us with what we need ?
> >>
> >
> > Sorry but I did not quite understand. I do not see anything related to
> > the 'skipped' status in the junit.framework.TestResult
> >
> > If there is such a status then it would be excellent; it is option #1 in
> > the first mail of the thread
> >
> > Thanks,
> > Mikhail
> >
>
> OK. What I am driving at is extending the basic JUnit test runner with a
> JUnit decorator that could be employed to decide - at runtime - on
> whether or not certain test cases in the test suite should be run. The
> decorator could also write out what tests got skipped.
>
> How the "run this test/exclude this test" decision is reached could be
> done in a number of ways. One approach would be to capture the
> identities of all of the tests you want to skip inside an "exclusions
> list" text file. The file could also be used to hold information about
> why each test is being excluded and whatever else might be useful for
> reporting purposes or perhaps to help with the runtime decision making.
>
> As discussed a while ago, using the existing JUnit capabilities like
> the  junit.extensions.TestDecorator class means that we have the
> flexibility to run *the same* test suite with whatever exclusions we
> like according to whatever configuration we want to put in place. All
> that would change is the text file that we use as input to the decorator.
>
> I have some code that does just this. I hope to be able to contribute it
> very soon now.
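[Editor's sketch] Pending George's actual contribution, the exclusion-list half of the idea might look something like the sketch below. The file format, class, and method names are all illustrative assumptions; a decorator in the style of junit.extensions.TestDecorator would consult shouldRun() before delegating to the wrapped test:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.HashSet;
import java.util.Set;

// Hypothetical exclusion list: each non-blank line names a test to skip,
// and '#' starts a trailing comment that can record why it is excluded.
public class ExclusionList {
    private final Set<String> excluded = new HashSet<String>();

    public ExclusionList(Reader in) throws IOException {
        BufferedReader r = new BufferedReader(in);
        String line;
        while ((line = r.readLine()) != null) {
            int hash = line.indexOf('#');           // strip trailing comment
            if (hash >= 0) line = line.substring(0, hash);
            line = line.trim();
            if (line.length() > 0) excluded.add(line);
        }
    }

    // True if the named test should be run under this configuration.
    public boolean shouldRun(String testName) {
        return !excluded.contains(testName);
    }

    public static void main(String[] args) throws IOException {
        String file =
            "org.example.FooTest # needs legacy provider\n" +
            "org.example.BarTest\n";
        ExclusionList list = new ExclusionList(new StringReader(file));
        System.out.println(list.shouldRun("org.example.FooTest")); // false
        System.out.println(list.shouldRun("org.example.BazTest")); // true
    }
}
```

Swapping configurations then only means swapping the text file fed to the decorator, exactly as described above.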
>
>
> >
> >>> Thanks,
> >>> Mikhail
> >>>
> >>>
> >>>
> >>>
> >>>> Alternatively, they could be
> >>>> included as part of a general test suite but be purposely skipped over
> >>>> at test execution time using a
> >>>> test exclusion list understood by the test runner.
> >>>>
> >>>>
> >>>> Best regards,
> >>>> George
> >>>> ________________________________________
> >>>> George C. Harley
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> Tim Ellison <t.p.ellison@gmail.com>
> >>>> 27/01/2006 08:53
> >>>> Please respond to
> >>>> harmony-dev@incubator.apache.org
> >>>>
> >>>>
> >>>> To
> >>>> harmony-dev@incubator.apache.org
> >>>> cc
> >>>>
> >>>> Subject
> >>>> Re: [testing] code for exotic configurations
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>> Anton Avtamonov wrote:
> >>>>
> >>>>
> >>>>>> Note that I could create my own provider and test with it, but what
> >>>>>> I would really want is to test how my EncryptedPrivateKeyInfo works
> >>>>>> with AlgorithmParameters from a real provider as well as how my
> >>>>>> other classes work with real implementations of crypto Engines.
> >>>>>>
> >>>>>> Thanks,
> >>>>>> Mikhail.
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>> Hi Mikhail,
> >>>>> There are 'system' and 'unit' tests. Traditionally, unit tests are
> >>>>> developer-level. Each unit test is intended to test just a limited
> >>>>> piece of functionality separately from other sub-systems (a test for
> >>>>> one function, a test for one class, etc.). Such tests must create the
> >>>>> desired environment around the functionality under test and run the
> >>>>> scenario under predefined conditions. Unit tests are usually able to
> >>>>> cover all scenarios (execution paths) for the tested parts of the
> >>>>> functionality.
> >>>>>
> >>>>> What you are talking about looks like 'system' testing. Such tests
> >>>>> usually run in a real environment and test the most common scenarios
> >>>>> (a redundant set; all scenarios usually cannot be covered). Such
> >>>>> testing is not concentrated on particular functionality, but
> >>>>> covers the work of the whole system.
> >>>>> A sample is: "run some demo application on some particular platform,
> >>>>> with some particular providers installed and perform some operations".
> >>>>>
> >>>>> I think currently we should focus on the 'unit' test approach since
> >>>>> it is more applicable during development (so my advice is to revert
> >>>>> your tests to install 'test' providers with the desired behavior, as
> >>>>> George proposed).
> >>>>> However, we should think about 'system' scenarios which can be run
> >>>>> at a later stage and act as 'verification' of the proper working of
> >>>>> the entire system.
> >>>>>
> >>>>>
> >>>> I agree with all this.  The unit tests are one style of test for
> >>>> establishing the correctness of the code.  As you point out the unit
> >>>> tests typically require a well-defined environment in which to run, and
> >>>> it becomes a judgment-call as to whether a particular test's
> >>>> environmental requirements are 'reasonable' or not.
> >>>>
> >>>> For example, you can reasonably expect all developers to have an
> >>>> environment to run unit tests that has enough RAM and a writable disk
> >>>> etc. such that if those things do not exist the tests will simply fail.
> >>>> However, you may decide it is unreasonable to expect the environment
> >>>> to include a populated LDAP server, or a carefully configured RMI server.
> >>>> If you were to call that environment unreasonable then testing JNDI
> >>>> and RMI would likely involve mock objects etc. to get good unit tests.
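[Editor's sketch] The mock-object approach mentioned here could be illustrated as below: the code under test depends on a narrow interface, and the unit test supplies a canned implementation instead of a populated LDAP server. Everything in this sketch is an assumption for illustration, not Harmony code:

```java
import java.util.HashMap;
import java.util.Map;

// A narrow, hypothetical lookup interface the code under test depends on.
interface DirectoryLookup {
    String lookup(String name);               // returns null if absent
}

// Mock implementation with canned entries, used only in unit tests.
class MockDirectory implements DirectoryLookup {
    private final Map<String, String> entries = new HashMap<String, String>();
    void put(String name, String value) { entries.put(name, value); }
    public String lookup(String name) { return entries.get(name); }
}

public class MockDemo {
    // Code under test: resolve a name, falling back to a default.
    static String resolve(DirectoryLookup dir, String name, String dflt) {
        String v = dir.lookup(name);
        return v != null ? v : dflt;
    }

    public static void main(String[] args) {
        MockDirectory dir = new MockDirectory();
        dir.put("cn=admin", "Administrator");
        System.out.println(resolve(dir, "cn=admin", "unknown"));  // Administrator
        System.out.println(resolve(dir, "cn=ghost", "unknown"));  // unknown
    }
}
```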
> >>>>
> >>>> Of course, as you point out, once you are passing the unit tests you
> >>>> also need the 'system' tests to ensure the code works in a real
> >>>> environment.  Usage scenarios based on the bigger system are good, as is
> >>>> running the bigger system's test suite on our runtime.
> >>>>
> >>>> Regards,
> >>>> Tim
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>> --
> >>>>> Anton Avtamonov,
> >>>>> Intel Middleware Products Division
> >>>>>
> >>>>>
> >>>>>
> >>>> --
> >>>>
> >>>> Tim Ellison (t.p.ellison@gmail.com)
> >>>> IBM Java technology centre, UK.
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>
> >>>
> >>
> >
> >
>
>
