activemq-dev mailing list archives

From artnaseef <>
Subject [DISCUSS] Releases and Testing
Date Sun, 01 Feb 2015 00:33:07 GMT
Defining a consistent approach to tests for releases will help us, both
near-term and long-term, come to agreement on (a) how to maintain quality
releases, and (b) how to improve the tests in a way that serves the needs of
the project.

As a general practice, tests that are unreliable raise a major question -
just how valuable are the tests?  With enough unreliable tests, can we ever
expect a single build to complete successfully?

How can we ensure the quality of ActiveMQ is maintained, and that tests are
safeguarding the solution from the introduction of bugs, in light of these
unreliable tests?
Putting some ideals here so we have the "end in mind" (Stephen Covey) --
i.e. so they can help us move in the right direction overall.  These are
ideals, not goals that are feasible within any reasonable timeframe.

Putting on my "purist" hat -- ideally, we would analyze every test to
determine the possibility of FALSE-NEGATIVES *and* FALSE-POSITIVES generated
by the test.  From there, it would be possible to look for methods of
distinguishing false-negatives and false-positives (for example, by
reviewing logs) and improving the tests so they hopefully never end in false
results.

Another ideal approach - return to the drawing board and define all of the
test scenarios needed to ensure ActiveMQ operates properly, then determine
the most reliable way to cover those test scenarios.  Discard redundant
tests and replace unreliable ones with reliable ones.

*Approach for Releases*
Back to the focus of this thread - let's define an acceptable approach to
the release.  Here is an idea to get the discussion started:

- Run the build with the Maven "-fn" flag (fail-never), then review all
failed tests and determine a course of action for each:
  - Re-run the test if there is reason (preferably a clear, documented
reason) to believe the failure was a false-negative (e.g. a test that
times out too aggressively)
  - Declare the failure a bug (or at least, a suspected bug), create a Jira
entry, and resolve
  - Replace the test with a more reliable alternative that addresses the
same underlying concern as the original test
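To illustrate the "more reliable alternative" bullet, here is a minimal sketch (not ActiveMQ code; the class and method names are invented for this example) of replacing a fixed-sleep-then-assert pattern with a polling wait that has a generous deadline. The test passes as soon as the condition holds, so a slow machine no longer produces a false-negative:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

public class AwaitExample {

    // Poll the condition until it holds or the deadline passes.
    // Returns true as soon as the condition is satisfied.
    public static boolean awaitCondition(BooleanSupplier condition,
                                         long timeout, TimeUnit unit)
            throws InterruptedException {
        long deadline = System.nanoTime() + unit.toNanos(timeout);
        while (System.nanoTime() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(50); // short poll interval instead of one long sleep
        }
        return condition.getAsBoolean(); // one last check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Simulated asynchronous state that becomes true after ~200 ms.
        BooleanSupplier messageArrived =
                () -> System.currentTimeMillis() - start > 200;

        // Fragile original pattern: Thread.sleep(250); then assert.
        // Reliable replacement: wait up to a generous 5 s, but return
        // immediately once the state holds.
        boolean ok = awaitCondition(messageArrived, 5, TimeUnit.SECONDS);
        System.out.println(ok ? "condition met" : "timed out");
    }
}
```

The key design point is that the timeout now only bounds the worst case; it no longer determines how long the test takes on a fast machine, and raising it to a very large value costs nothing when the test passes.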

*Call for Feedback*
To move this discussion forward, please provide as much negative feedback as
necessary and, at the same time, please provide reasoning or ideas that can
help move things forward.  Criticism (unactionable feedback) is discouraging
and unwelcome.  On a similar note - the practice of throwing out "-1" votes,
even for small, easily-addressed issues, without any offer to assist is
getting old.  I dream of seeing "-1, file <x> needs an update; I'll take
care of that myself right now."

Let's get this solved, continue with frequent releases, and then move
forward in improving ActiveMQ and enjoying the results!

Expect another thread soon with ideas on improving the tests in general.
