Message-ID: <43DA2177.8090403@googlemail.com>
Date: Fri, 27 Jan 2006 13:34:47 +0000
From: George Harley <george.c.harley@googlemail.com>
Reply-To: george.c.harley@googlemail.com
To: harmony-dev@incubator.apache.org
Subject: Re: [testing] code for exotic configurations

Hi Geir,

> This is why I advocate making a "separate tree" for the system tests -
> make it clear that they are not the general unit tests...

Right. But rather than being a separate directory tree in the source
repository, that "separate tree" could be realised as a logical JUnit test
suite grouping. So, for instance, say we have a bunch of "general" test
cases. We would add them to an executable JUnit test suite called, say,
TestSuiteGeneral. When TestSuiteGeneral is run, only the test cases that we
specifically added to that suite are executed. There is no "skipping" of
tests, because we only ever run the tests that we previously configured (in
the TestSuiteGeneral code) to run.
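To make that concrete, here is a minimal sketch of such a suite. It assumes
JUnit 3.8-style suite() methods; ExampleTest is just a hypothetical
stand-in for real test classes.

// Minimal sketch of a "general" suite (JUnit 3.8 style).
// ExampleTest is a hypothetical stand-in for real test classes.
import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

public class TestSuiteGeneral {

    // Trivial stand-in test case so the sketch is self-contained.
    public static class ExampleTest extends TestCase {
        public void testSomething() {
            assertTrue(1 + 1 == 2);
        }
    }

    public static Test suite() {
        TestSuite suite = new TestSuite("General tests");
        // Only test cases explicitly added here are executed by this suite.
        suite.addTestSuite(ExampleTest.class);
        // suite.addTestSuite(SomeOtherGeneralTest.class);  ...and so on
        return suite;
    }

    public static void main(String[] args) {
        junit.textui.TestRunner.run(suite());
    }
}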
Any "special" tests that either need specialist configuration or else
should not be run by just anyone could be programmatically grouped into a
separate executable JUnit test suite (see the sketch below). When that
suite is run, it *only* executes our group of special tests. Just as we let
JUnit tell us what passed, what failed and so on, we can use existing JUnit
practices to put our test cases into whatever runtime grouping we like.
That's clean and simple.
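A matching sketch of the "special" suite, under the same assumptions
(JUnit 3.8 style; the test class is a hypothetical stand-in for a test
needing an exotic environment):

// Sketch of a separate suite for tests that need special configuration.
// RealProviderTest is a hypothetical stand-in.
import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

public class TestSuiteExotic {

    // Stand-in for a test that needs, say, a specially configured provider.
    public static class RealProviderTest extends TestCase {
        public void testAgainstRealProvider() {
            // ...would exercise the specially configured environment...
        }
    }

    public static Test suite() {
        TestSuite suite = new TestSuite("Special-configuration tests");
        // Nothing outside this suite ever runs these tests.
        suite.addTestSuite(RealProviderTest.class);
        return suite;
    }

    public static void main(String[] args) {
        junit.textui.TestRunner.run(suite());
    }
}

Running "java TestSuiteExotic" (with junit.jar on the classpath) executes
exactly that group of special tests and nothing else.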
Best regards,
George


Geir Magnusson Jr wrote:
>
> Mikhail Loenko wrote:
>> On 1/27/06, George Harley1 wrote:
>>> But because we live in a less than ideal world there will, no doubt,
>>> be some tests that will demand an environment that is impossible, or
>>> at the very least difficult, to mock up for the majority of
>>> developers/testers.
>>
>> I absolutely agree that we are neither living in the ideal world nor
>> trying to make it ideal :)
>>
>> So until we get a 'system' test suite, why should we weaken the
>> existing tests?
>>
>>> One solution could be to segregate those tests into a separate test
>>> suite (available to all, but primarily for those working in the niche
>>> area that demands the special environment).
>>
>> Moving these kinds of tests would affect many people: they will see
>> separate suites, try them, ask questions...
>>
>> If the tests can be configured by only the few people who work on that
>> specific area, and those people are aware of those tests, why not just
>> print a log message when a test is skipped?
>
> Because the same set of people that will be bothered by separate suites
> will have the same reaction to skipped tests.
>
> This is why I advocate making a "separate tree" for the system tests -
> make it clear that they are not the general unit tests...
>
>> It would not disturb most of the people because the test will pass in a
>> 'bad' environment. But those who know about these tests will sometimes
>> grep the logs to validate the configuration.
>
> IMO, there's too much special information there, too much config. I'm a
> simple person, and like things clean and simple. I don't like to mix
> concerns when possible, and here's a place where it's definitely
> possible to separate cleanly.
>
> I don't see the downside.
>
> geir
>
>> Thanks,
>> Mikhail
>>
>>> Alternatively, they could be included as part of a general test suite
>>> but be purposely skipped over at test execution time using a test
>>> exclusion list understood by the test runner.
>>>
>>> Best regards,
>>> George
>>> ________________________________________
>>> George C. Harley
>>>
>>> Tim Ellison
>>> 27/01/2006 08:53
>>> Please respond to harmony-dev@incubator.apache.org
>>>
>>> To: harmony-dev@incubator.apache.org
>>> Subject: Re: [testing] code for exotic configurations
>>>
>>> Anton Avtamonov wrote:
>>>>> Note that I could create my own provider and test with it, but what
>>>>> I would really want is to test how my EncryptedPrivateKeyInfo works
>>>>> with AlgorithmParameters from a real provider, as well as how my
>>>>> other classes work with real implementations of crypto engines.
>>>>>
>>>>> Thanks,
>>>>> Mikhail.
>>>>>
>>>> Hi Mikhail,
>>>> There are 'system' and 'unit' tests. Traditionally, unit tests are
>>>> developer-level. Each unit test is intended to test just a limited
>>>> piece of functionality separately from other sub-systems (a test for
>>>> one function, a test for one class, etc.). Such tests must create the
>>>> desired environment around the functionality under test and run the
>>>> scenario under predefined conditions. Unit tests are usually able to
>>>> cover all scenarios (execution paths) for the tested parts of the
>>>> functionality.
>>>>
>>>> What you are talking about looks like 'system' testing. Such tests
>>>> usually run in the real environment and test the most common
>>>> scenarios (a reduced set; all scenarios usually cannot be covered).
>>>> Such testing is not concentrated on particular functionality, but
>>>> covers the work of the whole system.
>>>> A sample is: "run some demo application on some particular platform,
>>>> with some particular providers installed, and perform some
>>>> operations".
>>>>
>>>> I think currently we should focus on the 'unit' test approach, since
>>>> it is more applicable during development (so my advice is to revert
>>>> your tests to installing 'test' providers with the desired behavior,
>>>> as George proposed).
>>>> However, we should think about 'system' scenarios which can be run at
>>>> a later stage and act as 'verification' of the proper working of the
>>>> entire system.
>>>
>>> I agree with all this. The unit tests are one style of test for
>>> establishing the correctness of the code. As you point out, the unit
>>> tests typically require a well-defined environment in which to run,
>>> and it becomes a judgment call as to whether a particular test's
>>> environmental requirements are 'reasonable' or not.
>>>
>>> For example, you can reasonably expect all developers to have an
>>> environment to run unit tests that has enough RAM, a writable disk,
>>> etc., such that if those things do not exist the tests will simply
>>> fail. However, you may decide it is unreasonable to expect the
>>> environment to include a populated LDAP server, or a carefully
>>> configured RMI server. If you were to call that environment
>>> unreasonable, then testing JNDI and RMI would likely involve mock
>>> objects etc. to get good unit tests.
>>>
>>> Of course, as you point out, once you are passing the unit tests you
>>> also need the 'system' tests to ensure the code works in a real
>>> environment. Usage scenarios based on the bigger system are good, as
>>> is running the bigger system's test suite on our runtime.
>>>
>>> Regards,
>>> Tim
>>>
>>>> --
>>>> Anton Avtamonov,
>>>> Intel Middleware Products Division
>>>
>>> --
>>> Tim Ellison (t.p.ellison@gmail.com)
>>> IBM Java technology centre, UK.
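P.S. The "test exclusion list" idea quoted above could also be kept inside
plain JUnit. What follows is only a rough sketch under stated assumptions:
JUnit 3.8-style suites, a hypothetical excludes.txt listing one
fully-qualified test class name per line, and stand-in test classes.

// Rough sketch: build a suite from candidate tests, skipping any class
// named in an exclusion list. File and class names here are made up.
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.HashSet;
import java.util.Set;
import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

public class ExcludableTestSuite {

    // Stand-in candidates; a real suite would list the actual test classes.
    public static class FirstTest extends TestCase {
        public void testOne() { assertTrue(true); }
    }
    public static class SecondTest extends TestCase {
        public void testTwo() { assertTrue(true); }
    }

    public static Test suite() throws Exception {
        // Read the exclusion list (hypothetical excludes.txt).
        Set excluded = new HashSet();
        BufferedReader in = new BufferedReader(new FileReader("excludes.txt"));
        for (String line = in.readLine(); line != null; line = in.readLine()) {
            excluded.add(line.trim());
        }
        in.close();

        // Add every candidate that is not named in the exclusion list.
        Class[] candidates = { FirstTest.class, SecondTest.class };
        TestSuite suite = new TestSuite("General tests, minus exclusions");
        for (int i = 0; i < candidates.length; i++) {
            if (!excluded.contains(candidates[i].getName())) {
                suite.addTestSuite(candidates[i]);
            }
        }
        return suite;
    }
}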