river-dev mailing list archives

From Greg Trasuk <tras...@stratuscom.com>
Subject Re: Development Process - Experimental Testing, Theoretical analysis; code auditing and Static Analysis
Date Mon, 06 Jan 2014 02:11:18 GMT

On Jan 4, 2014, at 6:15 PM, Peter <jini@zeus.net.au> wrote:

>> 
>> “we’re using final variables, therefore all our code has to change”
>> (paraphrasing) are no substitute for reasoned analysis.  
> 
> You don't agree that final field usage should comply with the JMM?
> 
> Please also read the Java Memory Model.
> 

JMM is part of the JLS now. I read it.  I read with particular interest the parts about the
many possibilities for when the values of final variables are frozen.
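
To illustrate what is at stake: under the JMM (JLS §17.5), the value of a final field is only guaranteed visible to other threads if the constructed object is not published before the constructor completes.  A minimal sketch of the difference (hypothetical class names, not River code):

```java
// Sketch of JMM final-field semantics (JLS 17.5): the "frozen" value of a
// final field is only guaranteed visible to other threads when the object
// reference is published after the constructor finishes.
public class FinalFieldDemo {

    // Safe: 'this' never escapes during construction, so any thread that
    // sees a SafeHolder reference is guaranteed to see value == 42.
    static class SafeHolder {
        private final int value;
        SafeHolder(int value) {
            this.value = value;   // frozen at the end of the constructor
        }
        int value() { return value; }
    }

    // Broken: 'this' escapes through a static field before the freeze,
    // so another thread reading 'instance' may observe value == 0.
    static class LeakyHolder {
        static LeakyHolder instance;  // racy publication point
        private final int value;
        LeakyHolder(int value) {
            instance = this;      // escapes before the final-field freeze
            this.value = value;
        }
        int value() { return value; }
    }

    public static void main(String[] args) {
        // Reference published only after construction completes.
        SafeHolder safe = new SafeHolder(42);
        System.out.println(safe.value());
    }
}
```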

>> 
>> I teach programming.   I see this all the time.   When people make changes
>> based on what they “think” “might” be happening, it’s always a disaster.
> 
> You're assuming FindBugs, JMM non-compliance and common issues described by Concurrency
> in Practice, when clearly and easily identified, fall into the same category as someone
> who's learning to code for the first time?
> 
> You're advocating a completely experiment-driven approach, like someone who's learning
> to program experiences.  It is a very good approach for learning…
> 

Please don’t put words in my mouth.  I’m not talking about experimental vs theoretical.
 I’m talking about diagnostic techniques.  You need to have a theory about what a problem
is, gather information to support or disprove that theory (that could be either analysis or
experiments), and only then make changes to the code.  Then gather information that proves
your fix actually made a difference (again, could be analysis or experiments).

> 
> This would prevent a move from experimental to include a more theoretical development
> approach.  Test cases are still required to pass with a theoretical approach.
> 

Not at all.  “FindBugs reports a possible data race in TaskManager” is a perfectly good
JIRA issue: it is specific enough to be actionable, and should be limited enough in scope
that peers can review the solution.
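
For concreteness, here is the kind of race such a report typically points at, together with one conventional fix (hypothetical field names; this is not River's actual TaskManager code):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical illustration of a data race of the sort a static-analysis
// tool flags, and one conventional fix.  Not River's TaskManager code.
public class RaceDemo {

    // Racy: count++ is an unsynchronized read-modify-write; two threads
    // can read the same value, and one increment is silently lost.
    static class RacyCounter {
        private int count;
        void increment() { count++; }
        int get() { return count; }
    }

    // Fixed: AtomicInteger makes the increment a single atomic operation.
    static class SafeCounter {
        private final AtomicInteger count = new AtomicInteger();
        void increment() { count.incrementAndGet(); }
        int get() { return count.get(); }
    }

    public static void main(String[] args) throws InterruptedException {
        SafeCounter c = new SafeCounter();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) c.increment();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(c.get());  // always 40000 with SafeCounter
    }
}
```

The RacyCounter variant is exactly the sort of failure that “only fails on one platform” and “is not easily repeatable” — which is why static analysis catches it more reliably than test cases do.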

> and includes the community in decisions around how to fix
>> the problem that is stated. 
> 
> And when a simple discussion about a simple issue veers off topic with heated arguments,
> what then?  You propose I stop committing to skunk, until each change is discussed at
> length on the list?  But we can't even agree on final field usage?
> 
> We need to decide whether to include theory-driven development that includes compliance
> with standards like the JMM.
> 

As I’ve said before, my concern is not theoretical vs experimental.  It’s “what is this
change, why was it made, what are the effects, and are its effects justifiable?”

> Code has been committed to skunk for people to review, in public.  They might misinterpret
> my suggestions on the mailing list, but they will have a better understanding after reading
> the code.  I welcome their participation.
> 

There are 1557 files modified between 2.2.2 and qa_refactor.  62 deleted.  214 added.  How
practical is it to review?

> I am open to other solutions, but I don't like the suggestion that non-compliant code
> is preferred until a test case that proves failure can be found.  What happens when the
> test case only fails on one platform, but is not easily repeatable?  What if the fix just
> changes timing to mask the original problem?
> 
> Testing doesn't prove the absence of errors.  Reducing errors further requires additional
> tools like FindBugs and public, open peer review.
> 
> Trunk development was halted because of unrelated test failures.

It was halted because the community got nervous and started asking for review-then-commit,
so you moved over to qa_refactor.  From 2.2.2 to trunk - 1225 files modified, 32 deleted,
186 added.  Same problem, which we discussed last year.

Related email threads: 

http://mail-archives.apache.org/mod_mbox/river-dev/201211.mbox/%3C50B49395.30408%40qcg.nl%3E
http://mail-archives.apache.org/mod_mbox/river-dev/201211.mbox/%3CCAMPUXz9z%2Bk1XBgzfCONmRXh8y5yBtMjJNeUeTR03YEepe-60Zg%40mail.gmail.com%3E

In the end, you’re right - qa_refactor is a skunk branch, so you can go ahead and make all
the changes you want. I just think that if your intention is to get at least two other people
to sign off on it as a release, you’d do well to make it easy to review, and possibly even
include the community in the decision-making along the way.

Regards,

Greg.




