cassandra-dev mailing list archives

From Blake Eggleston <beggles...@apple.com>
Subject Re: [DISCUSS] Implementing code quality principles, and rules (was: Code quality, principles and rules)
Date Tue, 28 Mar 2017 05:13:53 GMT
In addition to its test coverage problem, the project has a general testability problem,
and I think it would be more effective to introduce testing guidelines and standards
that drive incremental improvement of both, rather than requiring that an arbitrary code
coverage metric be hit, which doesn’t tell the whole story anyway.

It’s not ready yet, but I’ve been putting together a testing standards document for the
project since bringing it up in the “Code quality, principles and rules” email thread
a week or so ago.

On March 27, 2017 at 4:51:31 PM, Edward Capriolo (edlinuxguru@gmail.com) wrote:
On Mon, Mar 27, 2017 at 7:03 PM, Josh McKenzie <jmckenzie@apache.org> wrote:  

> How do we plan on verifying #4? Also, root-cause to tie back new code that  
> introduces flaky tests (i.e. passes on commit, fails 5% of the time  
> thereafter) is a non-trivial pursuit (thinking #2 here), and a pretty  
> common problem in this environment.  
>  
> On Mon, Mar 27, 2017 at 6:51 PM, Nate McCall <zznate.m@gmail.com> wrote:  
>  
> > I don't want to lose track of the original idea from François, so  
> > let's do this formally in preparation for a vote. Having this all in  
> > place will make transition to new testing infrastructure more  
> > goal-oriented and keep us more focused moving forward.  
> >  
> > Does anybody have specific feedback/discussion points on the following  
> > (awesome, IMO) proposal:  
> >  
> > Principles:  
> >  
> > 1. Tests always pass. This is the starting point. If we don't care  
> > about test failures, then we should stop writing tests. A recurring  
> > failing test carries no signal and is better deleted.  
> > 2. The code is tested.  
> >  
> > Assuming we can align on these principles, here is a proposal for  
> > their implementation.  
> >  
> > Rules:  
> >  
> > 1. Each new release passes all tests (no flakiness).  
> > 2. If a patch has a failing test (test touching the same code path),  
> > the code or test should be fixed prior to being accepted.  
> > 3. Bug fixes should have one test that fails prior to the fix and  
> > passes after the fix.  
> > 4. New code should have at least 90% test coverage.  
> >  
>  
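Josh’s flakiness concern (a test that passes on commit and fails some fraction of the
time thereafter) can at least be measured by brute force: rerun a suspect test many times
and count the failures. A minimal POSIX-shell sketch; the function name and the placeholder
command are illustrative, not existing project tooling:

```shell
# flaky_rate CMD RUNS: run CMD that many times and report how often it failed.
flaky_rate() {
  cmd=$1
  runs=$2
  fails=0
  i=1
  while [ "$i" -le "$runs" ]; do
    # Any non-zero exit counts as a failure.
    if ! sh -c "$cmd" >/dev/null 2>&1; then
      fails=$((fails + 1))
    fi
    i=$((i + 1))
  done
  echo "$fails/$runs runs failed"
}

flaky_rate "false" 5   # prints "5/5 runs failed"
```

In practice the command would be the single-test invocation, e.g. something like
`flaky_rate "ant test -Dtest.name=SomeSuspectTest" 50` (the ant property name here is
an assumption, not a verified build target).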

True, #4 is hard to verify in the current state. This was mentioned in a  
separate thread: if the code were split into submodules, the code coverage tools  
would have less work to do, because they typically only count coverage for  
a module and the tests inside that module. At that point it should be easy  
to write a plugin on top of something like this:  
http://alvinalexander.com/blog/post/java/sample-cobertura-ant-build-script.  
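As a rough illustration of what such a plugin might gate on, here is a sketch that pulls
the `line-rate` attribute from a Cobertura `coverage.xml` report and fails when it is
below a threshold. The report path and the 0.9 threshold are assumptions for the sake of
the example; the attribute name follows Cobertura’s XML report format:

```shell
# check_coverage REPORT MIN: exit 0 if the report's overall line-rate >= MIN.
check_coverage() {
  report=$1
  min=$2
  # Cobertura's root element looks like: <coverage line-rate="0.85" ...>
  rate=$(sed -n 's/.*<coverage[^>]*line-rate="\([0-9.]*\)".*/\1/p' "$report" | head -n 1)
  # Compare as numbers; awk handles the floating-point comparison.
  awk -v r="$rate" -v m="$min" 'BEGIN { exit (r + 0 >= m + 0) ? 0 : 1 }'
}
```

A CI step could then be as simple as `check_coverage build/coverage.xml 0.9 || exit 1`
for the per-patch rule.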

This is also an option:  

https://about.sonarqube.com/news/2016/05/02/continuous-analysis-for-oss-projects.html  
