harmony-dev mailing list archives

From Egor Pasko <egor.pa...@gmail.com>
Subject Re: [DRLVM] General stability
Date Wed, 08 Nov 2006 06:48:48 GMT
On the 0x21B day of Apache Harmony Oleg Oleinik wrote:
> Such a model works, but there is a risk of fixing again "from scratch"
> those bugs which were already fixed on previous milestones.

Sometimes it is easier to fix a couple of bugs "from scratch" than to
spend a large amount of resources on regular complex checks (which do
not guarantee 100% stability either).

> We can eliminate this if we follow a "no regression" policy: if
> something works (classlib unit tests, Tomcat or Eclipse unit tests
> pass 100%, for example), it should continue working; any regression is
> a subject for reporting and fixing as soon as possible (it is easier
> to find the root cause and fix it, since we will know which commit
> caused the regression).
> 
> Will this model work? Isn't it a little bit better than focusing on runtime
> stability periodically?

"no regression" policy should be relevant to a number of *small* tests
that are easy to run and are running fast, to make them good as
pre-commit criteria.

Complex workloads _cannot_ be run as a pre-commit criterion. So there
_will be_ regressions. That's because:
* we cannot afford to run complex workloads on every commit
* we cannot afford complex rollbacks and stop-the-world fixing
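
To make "small and fast" concrete, here is a rough sketch of the kind
of test I have in mind (the class name and the numbers are invented,
not taken from our tree): a self-contained JUnit case that hammers one
suspect area, monitor contention in this case, and still finishes in
well under a second.

import junit.framework.TestCase;

public class MonitorEnterExitRegressionTest extends TestCase {

    // A few threads contend on one lock; every increment must be kept,
    // so the final value has to equal THREADS * ITERATIONS.
    public void testContendedIncrement() throws Exception {
        final int THREADS = 4;
        final int ITERATIONS = 10000;
        final int[] counter = new int[1];
        final Object lock = new Object();

        Thread[] workers = new Thread[THREADS];
        for (int i = 0; i < THREADS; i++) {
            workers[i] = new Thread(new Runnable() {
                public void run() {
                    for (int j = 0; j < ITERATIONS; j++) {
                        synchronized (lock) {
                            counter[0]++;
                        }
                    }
                }
            });
            workers[i].start();
        }
        for (int i = 0; i < THREADS; i++) {
            workers[i].join();
        }
        assertEquals(THREADS * ITERATIONS, counter[0]);
    }
}

Hundreds of tests of roughly this size can run on every commit without
slowing anybody down.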

Many successful projects (probably all of them) have stability
periods, even stability releases (and, yes, stability branches). That
is considered effective, and IMO our project should do the same.

We _have to_ tolerate some bugs in order to keep development active.
But not too many. It is always a tradeoff.

To summarize: I support your idea of improving the regression test
base and infrastructure. Let it be a step-by-step improvement. Then we
can decide which tests to run as pre-commit checks and which to use to
measure overall stability.
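
As a strawman for that split (all suite and class names below are made
up), the harness side could be as simple as two JUnit suites, one
gating commits and one run nightly:

import junit.framework.Test;
import junit.framework.TestSuite;

public class PreCommitSuite {
    public static Test suite() {
        TestSuite suite = new TestSuite("DRLVM pre-commit (small, fast)");
        // Only small, deterministic tests that finish in seconds belong
        // here, e.g. the monitor test sketched earlier in this mail.
        suite.addTestSuite(MonitorEnterExitRegressionTest.class);
        return suite;
    }
}

class NightlyStabilitySuite {
    public static Test suite() {
        TestSuite suite = new TestSuite("DRLVM nightly stability");
        // Heavy workloads (Eclipse, Tomcat scenarios and the like) would
        // be added here; they track overall stability but do not gate
        // individual commits.
        return suite;
    }
}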

> On 11/8/06, Tim Ellison <t.p.ellison@gmail.com> wrote:
> >
> > I wouldn't go so far as to label issues as "won't fix" unless they are
> > really high risk and low value items.
> >
> > It's useful to go through a stabilization period where the focus is on
> > getting the code solid again and delaying significant new functionality
> > until it is achieved.  A plan that aims to deliver stable milestones
> > at regular intervals is, in my experience, a good way to focus the
> > development effort.
> >
> > Regards,
> > Tim
> >
> > Weldon Washburn wrote:
> > > Folks,
> > >
> > > I have spent the last two months committing patches to the VM.  While
> > > we have added a ton of much needed functionality, the stability of the
> > > system has been ignored.  By chance, I looked at thread synchronization
> > > design problems this week.  It's very apparent that we lack the
> > > regression testing to really find threading bugs, test the fixes and
> > > test against regression.  No doubt there are similar problems in other
> > > VM subsystems.  "build test" is necessary but not sufficient for where
> > > we need to go.  In a sense, committing code with only "build test" to
> > > prevent regression is the equivalent of flying in the fog without
> > > instrumentation.
> > >
> > > So that we can get engineers focused on stability, I am thinking of
> > > coding the JIRAs that involve new features as "later" or even "won't
> > > fix".  Please feel free to comment.
> > >
> > > We also need to restart the old email threads on regression tests.  For
> > > example, we need some sort of automated test script that runs Eclipse
> > > and Tomcat, etc. in a deterministic fashion so that we can compare test
> > > results.  It does not have to be perfect to start with, just repeatable
> > > and easy to use.  Feel free to beat me to starting these threads :)
> > >
> >
> > --
> >
> > Tim Ellison (t.p.ellison@gmail.com)
> > IBM Java technology centre, UK.
> >
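
P.S. Regarding the deterministic Eclipse/Tomcat runs Weldon mentions
above: a very rough sketch of such a harness could simply launch the
workload, capture its output, and diff it against a stored "golden" log
from a known-good run. The command and file names below are purely
illustrative, not an existing script.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

public class GoldenRunComparator {

    public static void main(String[] args) throws Exception {
        // e.g. java GoldenRunComparator run-tomcat-scenario.sh golden/tomcat.log
        String command = args[0];
        String goldenLog = args[1];

        // Run the workload and collect its stdout.
        Process p = Runtime.getRuntime().exec(command);
        List actual = readLines(new BufferedReader(
                new InputStreamReader(p.getInputStream())));
        p.waitFor();

        // Read the reference output from a previous known-good run.
        List expected = readLines(new BufferedReader(new FileReader(goldenLog)));

        // Report the first divergence, if any.
        int n = Math.min(actual.size(), expected.size());
        for (int i = 0; i < n; i++) {
            if (!actual.get(i).equals(expected.get(i))) {
                System.err.println("Mismatch at line " + (i + 1) + ": "
                        + actual.get(i) + " != " + expected.get(i));
                System.exit(1);
            }
        }
        if (actual.size() != expected.size()) {
            System.err.println("Output length differs: " + actual.size()
                    + " vs " + expected.size());
            System.exit(1);
        }
        System.out.println("Run matches golden log.");
    }

    private static List readLines(BufferedReader reader) throws Exception {
        List lines = new ArrayList();
        String line;
        while ((line = reader.readLine()) != null) {
            lines.add(line);
        }
        reader.close();
        return lines;
    }
}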

-- 
Egor Pasko

