polygene-dev mailing list archives

From Kent SĂžlvsten <kent.soelvs...@gmail.com>
Subject Re: Test Points
Date Tue, 14 Jun 2016 11:39:53 GMT
This looks a bit similar to collecting metrics for code running in production.

Maybe the same mechanism could serve multiple purposes?


On 14-06-2016 at 13:32, Niclas Hedhman wrote:
> Well, I bet that it is relatively difficult for Akka, at least in the Java
> version. In Scala, they might have a compiler plugin to achieve this.
> For Zest, the situation is quite different. In fact, I don't even think we
> need any additional features in Zest Core to do this. Anyone can do it on
> their own using the regular SDK.
> Example:
> public interface SomeComposite
> {
>     void doSomething( String arg1, ZonedDateTime time );
>     ZonedDateTime findSomething( String arg );
>     MyState myState();  // getter for some other state.
> }
> @AppliesTo( TestPoint.class )
> public abstract class SomeCompositeTestpoint extends SideEffectOf<SomeComposite>
>     implements SomeComposite
> {
>     @Service
>     private TestPointReporter reporter;
>
>     @Invocation
>     private Method method;
>
>     public void doSomething( String arg1, ZonedDateTime time )
>     {
>         reporter.report( method, null, arg1, time );
>     }
>
>     public ZonedDateTime findSomething( String arg )
>     {
>         // In a side-effect, the invocation's result is reachable via the
>         // inherited 'result' proxy ('next' belongs to Concerns).
>         ZonedDateTime value = result.findSomething( arg );
>         reporter.report( method, value, arg );
>         return value;
>     }
> }
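[Editorial sketch: outside of Zest, the interception idea in the example above can be illustrated with a plain JDK dynamic proxy. The `Reporter` and `Greeter` names below are hypothetical stand-ins, not Zest API; this is only a minimal sketch of how a generic "report every invocation" side-effect behaves.]

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class TestPointProxyDemo
{
    // Hypothetical stand-in for the TestPointReporter service above.
    interface Reporter
    {
        void report( Method method, Object result, Object... args );
    }

    // A business interface we want to observe.
    interface Greeter
    {
        String greet( String name );
    }

    // Wrap any implementation in a proxy that reports every invocation --
    // roughly what a generic side-effect does inside the runtime's call chain.
    @SuppressWarnings( "unchecked" )
    static <T> T withTestPoints( Class<T> type, T delegate, Reporter reporter )
    {
        InvocationHandler handler = ( proxy, method, args ) -> {
            Object result = method.invoke( delegate, args );
            reporter.report( method, result, args == null ? new Object[ 0 ] : args );
            return result;
        };
        return (T) Proxy.newProxyInstance(
            type.getClassLoader(), new Class<?>[]{ type }, handler );
    }

    public static void main( String[] args )
    {
        List<String> reports = new ArrayList<>();
        Reporter reporter = ( method, result, arguments ) ->
            reports.add( method.getName() + " -> " + result );

        Greeter greeter = withTestPoints( Greeter.class, name -> "Hello, " + name, reporter );
        System.out.println( greeter.greet( "Zest" ) );  // the call behaves normally
        System.out.println( reports.get( 0 ) );         // the "side-effect" observed it
    }
}
```

The caller never sees the proxy machinery, which mirrors the appeal of doing this as a SideEffect: test-point reporting stays out of the business code.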
> That way, the TestPoint side-effect can be Generic and does not need to be
> created for each type under test. But possibly the above should also do the
> assertions; otherwise the Reporter would need to have a subsystem for that.
> In that case, it is a matter of how the assertions are set up... well,
> obtained from a service of some kind. And then some API/SPI is needed for
> that... and so on. So I am not sure which is better.
> Anyway, at bootstrap the SideEffect is added when we run the tests in
> question, and otherwise not. So the only "keep in sync" concern is that
> @TestPoint will remain in the code, and that could be a good thing. It could
> mark important API points that we want to preserve, whereas in other places
> we don't want TestPoints, as that code is free to change and we should
> probably not build a fragile test system.
> My question is basically around different ways that this can be solved, and
> whether there might be some super-nice way to do this "if only we had this
> particular feature in Core". Otherwise, it can be made as a library.
> Cheers
> Niclas
> On Tue, Jun 14, 2016 at 7:05 PM, Sandro Martini <sandro.martini@gmail.com>
> wrote:
>> Hi all,
>> just for info: I see that some libraries, like the great
>> [Akka](http://akka.io), have an instrumented version too, so if/when
>> needed it's possible to use that version to gather more info at runtime
>> (via JMX, etc.), have more asserts, and if needed even to trigger some
>> action ...
>> Maybe something like this could be applied here, with a variant in the
>> published artifacts, but for sure it's not trivial to set up and keep
>> aligned with the "normal" (non-instrumented) version ...
>> What do you think?
>> Bye,
>> Sandro
>> 2016-06-14 2:26 GMT+02:00 Niclas Hedhman <niclas@hedhman.org>:
>>> Hi,
>>> I just had a revelation, watching Uncle Bob talk about TDD, combined with
>>> my knowledge of electronics design, which uses Test Points (both at board
>>> level and at silicon level).
>>> Since Zest "owns" the call chain, we could rather easily design a feature
>>> that is the equivalent of Test Points in electronics: places where values
>>> are checked against an expectation.
>>> Isn't that what the "assert" keyword is all about?
>>> Yes and no.
>>> The assert keyword can only tell whether a value is within an expected
>>> range. It is rather difficult to communicate to assert which values are
>>> expected right now.
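[Editorial sketch: the limitation described above can be illustrated in plain Java. This is a hypothetical example, not from the thread. An `assert` can state a static predicate about a value, but it cannot know what exact value is expected at a particular point in a particular test sequence.]

```java
public class AssertRangeDemo
{
    static int doubled( int x )
    {
        int result = x * 2;
        // An assert can express a fixed predicate about the value...
        assert result >= 0 : "result out of range";
        // ...but it cannot express "this specific call, third in the test
        // sequence, should yield exactly 14" -- that knowledge lives outside
        // the method, which is where an external test-point reporter helps.
        return result;
    }

    public static void main( String[] args )
    {
        System.out.println( doubled( 7 ) );  // prints 14
    }
}
```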
>>> What is the purpose of this?
>>> Well, I think that unit tests are a little bit "weak", since it is
>>> difficult to test that sequencing is correct, that interdependent
>>> computations are accurate, and many other "functional" and "acceptance"
>>> test level issues. I think we can solve this rather neatly by assembling
>>> the "real" application with additional Test Points (well, the annotations
>>> are there all the time) and Memory EntityStores, then feeding actual data
>>> through and validating the results.
>>> So how is this going to work?
>>> Well, I don't know yet. But I imagine that one test consists of S setup
>>> steps, N steps of execution, and M results from T test points. Details
>>> are not clear yet.
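[Editorial sketch: one way to read that "S setup steps, N execution steps, M results from T test points" structure is a test expressed as data. Everything below is a hypothetical shape invented for illustration; none of these names exist in Zest.]

```java
import java.util.List;
import java.util.Map;

public class TestPointPlan
{
    // One expected value at one named test point (hypothetical shape).
    record Expectation( String testPoint, Object expectedValue ) { }

    record TestCase( List<Runnable> setupSteps,       // the S setup steps
                     List<Runnable> executionSteps,   // the N execution steps
                     List<Expectation> expectations ) // M expected results at T test points
    {
        // Run the steps, then compare what the reporter captured ('observed',
        // keyed by test-point name) against the expectations. Returns failures.
        List<String> run( Map<String, Object> observed )
        {
            setupSteps.forEach( Runnable::run );
            executionSteps.forEach( Runnable::run );
            return expectations.stream()
                .filter( e -> !e.expectedValue().equals( observed.get( e.testPoint() ) ) )
                .map( e -> e.testPoint() + ": expected " + e.expectedValue()
                           + " but observed " + observed.get( e.testPoint() ) )
                .toList();
        }
    }
}
```

Under this reading, the side-effect's job is only to fill the `observed` map as the application runs; the assertion logic stays in the test harness, which is one answer to the "where do assertions live" question above.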
>>> I imagine that a Test Point is a combination of a SideEffect, an
>>> annotation, and a Reporting service. I imagine that the side effect has
>>> some way of knowing what is expected to happen at each test point.
>>> This is GutFeeling(tm) innovation at the moment, but I think there is
>>> strong value in here.
>>> As usual, feedback is most welcome...
>>> Cheers
>>> Niclas
