activemq-dev mailing list archives

From Clebert <clebert.suco...@gmail.com>
Subject Re: [DISCUSS] Releases and Testing
Date Sun, 01 Feb 2015 20:21:46 GMT
I understand. I'm just saying this could be done through the newer branches. It's a better
strategy for moving forward, IMHO.

-- Clebert Suconic typing on the iPhone. 

> On Feb 1, 2015, at 13:13, Jamie G. <jamie.goodyear@gmail.com> wrote:
> 
> ActiveMQ 5.x is in wide deployment; improving the community's ability
> to maintain the code and deliver service releases is good.
> 
> Breaking the tests into 'release' and 'deep testing' sets does make sense
> in the context of 10-hour builds. The goal is still for end users to be
> able to run said tests successfully.  I'm just suggesting a
> community-oriented approach to tackling the project.
> 
> Cheers,
> Jamie
> 
>> On Sun, Feb 1, 2015 at 2:14 PM, Clebert <clebert.suconic@gmail.com> wrote:
>> Please look at my post regarding the testsuite. Why don't you guys contribute
>> effort towards the activemq-6 branch? There's an ongoing effort there.
>> 
>> -- Clebert Suconic typing on the iPhone.
>> 
>>> On Feb 1, 2015, at 12:34, Jamie G. <jamie.goodyear@gmail.com> wrote:
>>> 
>>> The choice to fix, refactor, or remove test cases should be reasonably
>>> straightforward on a case-by-case basis - the real challenge in my
>>> mind is the volume to be reviewed.
>>> 
>>> Perhaps the AMQ community could parcel the test cases into small sets,
>>> each tracked by a Jira task. These sets could then be posted to a
>>> community tracking page showing which ones have been reviewed, which
>>> are under review, and which ones have not been touched.
>>> 
>>> The reason I'd like to see a table tracking these test case set
>>> reviews is that it would provide new contributors an easy way to see
>>> where they could jump in and help out -- much like the old Servicemix
>>> community wish page (that's how I was able to jump in and start
>>> helping effectively back in the day). Many hands make light work.
>>> 
>>> The overhead of having the tracking table, Jiras, and co-ordination
>>> should be offset by having the work spread well over many people, and
>>> by providing new contributors a great way to start interacting with the
>>> community.
>>> 
>>> Cheers,
>>> Jamie
>>> 
>>>> On Sat, Jan 31, 2015 at 9:03 PM, artnaseef <art@artnaseef.com> wrote:
>>>> *Overview*
>>>> Defining a consistent approach to tests for releases will help us, both
>>>> near-term and long-term, come to agreement on (a) how to maintain quality
>>>> releases, and (b) how to improve the tests in a way that serves the needs
>>>> of releases.
>>>> 
>>>> As a general practice, tests that are unreliable raise a major question -
>>>> just how valuable are the tests?  With enough unreliable tests, can we ever
>>>> expect a single build to complete successfully?
>>>> 
>>>> How can we ensure the quality of ActiveMQ is maintained, and tests are
>>>> safeguarding the solution from the introduction of bugs, in light of these
>>>> tests?
>>>> 
>>>> *Ideally*
>>>> Putting some ideals here so we have the "end in mind" (Stephen Covey) --
>>>> i.e. so they can help us move in the right direction overall.  These are
>>>> definitely not feasible within any reasonable timeframe.
>>>> 
>>>> Putting on my "purist" hat -- ideally, we would analyze every test to
>>>> determine the possibility of FALSE-NEGATIVES *and* FALSE-POSITIVES generated
>>>> by the test.  From there, it would be possible to look for methods of
>>>> distinguishing false-negatives and false-positives (for example, by
>>>> reviewing logs) and improving the tests so they hopefully never end in false
>>>> results.
>>>> 
>>>> Another ideal approach - return to the drawing board and define all of the
>>>> test scenarios needed to ensure ActiveMQ operates properly, then determine
>>>> the most reliable way to cover those test scenarios.  Discard redundant
>>>> tests and replace unreliable ones with reliable ones.
>>>> 
>>>> *Approach for Releases*
>>>> Back to the focus of this thread - let's define an acceptable approach to
>>>> the release.  Here is an idea to get the discussion started:
>>>> 
>>>> - Run the build with the Maven "-fn" (--fail-never) flag, then review all
>>>>   failed tests and determine a course of action for each (a rough sketch
>>>>   of the timeout case follows this list):
>>>>   - Re-run the test if there is reason (preferably a clear, documented
>>>>     reason) to believe the failure was a false-negative (e.g. a test that
>>>>     times out too aggressively)
>>>>   - Declare the failure a bug (or at least a suspected bug), create a Jira
>>>>     entry, and resolve it
>>>>   - Replace the test with a more reliable alternative that addresses the
>>>>     same underlying concern as the original test
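>>>> 
>>>> As an illustration of the "too-aggressive timeout" case, here is a minimal,
>>>> hypothetical JUnit 4 sketch (the class, helper, and numbers are made up for
>>>> this email, not taken from the ActiveMQ test suite) of replacing a fixed
>>>> short sleep with a polling wait bounded by a generous deadline:
>>>> 
>>>>     import static org.junit.Assert.assertTrue;
>>>>     import java.util.concurrent.TimeUnit;
>>>>     import java.util.function.BooleanSupplier;
>>>>     import org.junit.Test;
>>>> 
>>>>     public class QueueDepthTest {
>>>> 
>>>>         // Poll a condition until a generous deadline instead of sleeping a
>>>>         // fixed short interval and asserting once.
>>>>         private boolean waitFor(BooleanSupplier condition, long timeoutMillis)
>>>>                 throws InterruptedException {
>>>>             long deadline = System.currentTimeMillis() + timeoutMillis;
>>>>             while (System.currentTimeMillis() < deadline) {
>>>>                 if (condition.getAsBoolean()) {
>>>>                     return true;
>>>>                 }
>>>>                 TimeUnit.MILLISECONDS.sleep(100);
>>>>             }
>>>>             return condition.getAsBoolean();
>>>>         }
>>>> 
>>>>         @Test
>>>>         public void messagesAreEventuallyDelivered() throws Exception {
>>>>             // ... send messages to the broker here (omitted) ...
>>>>             // 30s is an upper bound, not a fixed wait: the test passes as
>>>>             // soon as the condition holds, so a slow CI machine no longer
>>>>             // produces a false-negative.
>>>>             assertTrue("messages were not delivered in time",
>>>>                     waitFor(() -> deliveredCount() >= 10, 30_000));
>>>>         }
>>>> 
>>>>         private int deliveredCount() {
>>>>             // Placeholder for a real broker query (e.g. via JMX); hard-coded
>>>>             // so the sketch compiles stand-alone.
>>>>             return 10;
>>>>         }
>>>>     }
>>>> 
>>>> The same pattern also works for the "replace with a more reliable
>>>> alternative" option: the underlying concern being tested stays the same,
>>>> only the waiting strategy changes.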
>>>> 
>>>> *Call for Feedback*
>>>> To move this discussion forward, please provide as much negative feedback
>>>> as necessary and, at the same time, please provide reasoning or ideas that
>>>> can help move things forward.  Criticism (unactionable feedback) is
>>>> discouraging and unwelcome.  On a similar note - the practice of throwing
>>>> out "-1" votes, even for small, easily-addressed issues, without any offer
>>>> to assist is getting old.  I dream of seeing "-1, file <x> needs an update;
>>>> I'll take care of that myself right now."
>>>> 
>>>> *Wrap-Up*
>>>> Let's get this solved, continue with frequent releases, and then move
>>>> forward in improving ActiveMQ and enjoying the results!
>>>> 
>>>> Expect another thread soon with ideas on improving the tests in general.
>>>> 
>>>> 
>>>> 
>>>> 
>>>> --
>>>> View this message in context: http://activemq.2283324.n4.nabble.com/DISCUSS-Releases-and-Testing-tp4690763.html
>>>> Sent from the ActiveMQ - Dev mailing list archive at Nabble.com.
