nifi-issues mailing list archives

From "Mr TheSegfault (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (MINIFICPP-967) Improve SITs
Date Tue, 16 Jul 2019 16:43:00 GMT

     [ https://issues.apache.org/jira/browse/MINIFICPP-967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mr TheSegfault updated MINIFICPP-967:
-------------------------------------
    Description: 
I'm using this ticket to collect various other tickets (closed and open) so that their work can be added to our docker test framework.

 

In my opinion this is the highest need for 0.7.0, 0.8.0, and 0.9.0. Bugs will arise as a result
of this work; however, a variety of tickets have been created and closed across this project and
elsewhere, so let's gather them into a single EPIC we can all work on together.

 

docker-verify

As in any project, we have several stages of testing: unit, integration, and SIT
(system integration testing).

 

1) V&V: verify and validate interactions between internal and external components

   In many cases we can create all appropriate boundary-value (BV), equivalence-class (EC),
and introspective test cases in unit and integration tests, but there are many that we cannot
capture without integrating with a C2 server, a NiFi instance, a Kafka broker, or another
external service. We should not attempt to launch these services in unit or integration tests;
external services, and the requirements for communicating with them, should be verified and
validated in docker containers, as sketched below.
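
A minimal, hypothetical sketch of what such a check could look like, using the Python docker SDK. The nginx stand-in image, host port, and retry loop are illustrative assumptions, not part of our framework:

{code:python}
# Hypothetical sketch: verify comms against an external service running in a
# container. The image and port below are illustrative stand-ins for a real
# C2 server, NiFi instance, or Kafka broker.
import time
import urllib.request

import docker

client = docker.from_env()
service = client.containers.run(
    "nginx:alpine",          # stand-in external service image (assumption)
    detach=True,
    ports={"80/tcp": 8080},  # publish on a host port the harness can reach
)
try:
    # Poll until the service answers, then assert basic reachability.
    for _ in range(30):
        try:
            with urllib.request.urlopen("http://localhost:8080/") as resp:
                assert resp.status == 200
                break
        except OSError:
            time.sleep(1)
    else:
        raise RuntimeError("service never became reachable")
finally:
    service.remove(force=True)  # always tear the container down
{code}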

2)  Regression Testing – we should use this for all releases and before merging. This may
mean adding a targets if necessary, but the important thing is that we define expectations
within this test framework ( and the DSL ) and adhere to them. This may mean that we are testing
for memory leaks. We can do this in docker tests more easily than unit/integration tests.
ASAN is a great goal to have for upcoming releases and I think docker tests are where we can
build this functionality into so that we can test across versions of systems, glibc, and musl.
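
For example, a docker test could run an ASAN-instrumented build and fail on any sanitizer report. A sketch, assuming a hypothetical "minifi-asan" image built with -fsanitize=address:

{code:python}
# Hypothetical sketch: fail a docker test when an ASAN-instrumented build
# (imagined image "minifi-asan:latest") reports an error or a leak.
import docker

client = docker.from_env()
logs = client.containers.run(
    "minifi-asan:latest",                           # assumed ASAN-built image
    environment={"ASAN_OPTIONS": "detect_leaks=1"},
    remove=True,                                    # clean up after the run
)

text = logs.decode("utf-8", errors="replace")
for banner in ("ERROR: AddressSanitizer", "ERROR: LeakSanitizer"):
    # Sanitizer reports always carry one of these banners.
    assert banner not in text, f"sanitizer reported a failure:\n{text}"
{code}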

 

3) We build on many versions of underlying software. While we can mitigate some factors
by bundling our own versions of system libs, that won't solve every issue and is not
something all consumers want. We can address these needs by using docker tests to splay
across platforms and build systems, as in the sketch below.
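
A sketch of how that splaying could be driven; the ARG-accepting Dockerfile, the base images, and the tags are assumptions:

{code:python}
# Hypothetical sketch: run the same docker test across glibc and musl
# userlands. Assumes a Dockerfile in the current directory that honors
# "ARG BASE_IMAGE" (an assumption, not our current layout).
import docker

client = docker.from_env()

BASE_IMAGES = [
    "ubuntu:22.04",     # glibc
    "debian:bookworm",  # glibc, different packaging
    "alpine:3.19",      # musl
]

for base in BASE_IMAGES:
    tag = "minifi-test-" + base.replace(":", "-")
    # Rebuild the identical image definition on each base.
    client.images.build(path=".", tag=tag,
                        buildargs={"BASE_IMAGE": base})
    # Run the same test entrypoint in every userland.
    out = client.containers.run(tag, remove=True)
    print(base, "->", out.decode().strip())
{code}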

 

4) Fuzzing: fuzzing can and often should be done at every level, but we can very easily
define ways in the DSL to fuzz the properties of processors. This mechanism should allow us
to better test boundary values and equivalence classes; a sketch follows.
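
For instance, BV and EC value generation for a processor property could look like the following sketch. build_flow/run_flow are imagined DSL hooks, and "Batch Size" with its [1, 1000] range is just an illustrative property:

{code:python}
# Hypothetical sketch: DSL-level fuzzing of a numeric processor property
# using boundary values and equivalence-class representatives.
import random

def boundary_values(lo, hi):
    # Classic boundary-value picks around a closed [lo, hi] range.
    return [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]

def equivalence_classes(lo, hi):
    # One random representative per class: below, inside, above the range.
    return [random.randint(lo - 1000, lo - 1),
            random.randint(lo, hi),
            random.randint(hi + 1, hi + 1000)]

# "Batch Size" and the [1, 1000] range are illustrative assumptions.
for value in boundary_values(1, 1000) + equivalence_classes(1, 1000):
    expect_valid = 1 <= value <= 1000
    print(f"fuzzing Batch Size = {value} (expect valid: {expect_valid})")
    # Imagined DSL hooks, not a real API:
    # flow = build_flow(processor="GenerateFlowFile",
    #                   properties={"Batch Size": str(value)})
    # run_flow(flow, expect_valid=expect_valid)
{code}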


> Improve SITs
> ------------
>
>                 Key: MINIFICPP-967
>                 URL: https://issues.apache.org/jira/browse/MINIFICPP-967
>             Project: Apache NiFi MiNiFi C++
>          Issue Type: Epic
>            Reporter: Mr TheSegfault
>            Priority: Blocker
>             Fix For: 0.9.0, 0.8.0, 0.7.0
>



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
