harmony-dev mailing list archives

From "Alexey Petrenko" <alexey.a.petre...@gmail.com>
Subject Re: [general] Discussion: how to keep up stability and fast progress all together?
Date Wed, 04 Apr 2007 07:31:45 GMT
2007/4/3, Vladimir Ivanov <ivavladimir@gmail.com>:
> On 4/2/07, Stepan Mishura <stepan.mishura@gmail.com> wrote:
> > On 3/30/07, Tim Ellison wrote:
> > > Stepan Mishura wrote:
> > > > We've made big progress in improving the project's code base. Just
> > > > for example, the total number of excluded tests in the class
> > > > library was reduced by ~60 entries in two months. Also taking into
> > > > account that the Windows x86_64 build was enabled and the test
> > > > suite is growing, I think this is a good progress indicator for the
> > > > Class library. The same goes for DRL VM – new testing modes have
> > > > been added and the total number of excluded tests is decreasing for
> > > > most modes. Let's keep up the good progress!
> > > >
> > > > But I'd like to draw attention to stability issues. I've been
> > > > monitoring CC status for a couple of months and my impression is
> > > > that the stability situation has become worse – the number of
> > > > failure reports is growing. I'm afraid that if we set keeping good
> > > > stability aside then it may affect the overall project's progress.
> > > >
> > > > I'd like to encourage everybody to pay attention to the stability
> > > > side, and I'd like to hear ideas on how to improve the situation.
> > >
> > > Caveat:  I'm still 200 emails behind on the dev list, a good sign of the
> > > project's liveliness, but my apologies in advance for any repetition...
> > >
> > > IMO we won't achieve rock solid stability without focusing on it as an
> > > explicit goal; and delivering a Milestone release is the best way to get
> > > that focus.
> > >
> >
> > Yes, I agree that without focusing on stability it is hard to achieve it.
> >
> > > Not exactly a novel or radical idea, but Milestones have a number of
> > > benefits not least of which is that they demonstrate we, as a diverse
> > > group, can converge on a delivery of the code we are working on.  Some
> > > projects will rumble on forever without committing to a stable, tested,
> > > and likely imperfect, packaging of something.
> > >
> > > If the Milestones are time-boxed they also form a natural boundary for
> > > feature planning, and afford some predictability to the project that is
> > > also important.
> > >
> > > In my experience, something like 6 to 8 weeks between Milestones is a
> > > good period of time.  Four weeks is too short to get big ticket items in
> > > and stable, and 12 weeks (a quarter year) is so long that
> > > instability can set in.
> > >
> > > In that 6 to 8 week period there should be a time at the end where we
> > > hold back from introducing cool new function, and emphasize testing and
> > > fixing.  Maybe that is the last seven days leading up to the Milestone,
> > > and of course, if instability exists we slip the date until we can
> > > declare a stable point.
> > >
> >
> > Sure, this approach makes sense and I think we should accept and
> > follow it. I see only one issue here - it lets instabilities
> > accumulate and remain unnoticed (ignored?) in the code for some
> > period of time. As a result, a minor update can have unintended
> > consequences.
> >
> > Currently if we identify a regression we try to find the guilty
> > commit and fix it or roll it back. I think it is the right way – we
> > keep the code base in good shape and don't let the number of known
> > problems grow. This approach has shown its efficiency and the only
> > thing I can do here is encourage all contributors to run all
> > available tests after doing any non-trivial change. But it seems that
> > for intermittent failures the approach of running all testing
> > scenarios doesn't work well – usually they are not immediately
> > detected. And it's hard to find the guilty update after a long time,
> > so we tend to put such tests on the exclude list.
> >
>
> > I'd like to propose the following approach that may help us to know
> > about instabilities: develop a scenario for testing stability (or
> > take an existing one, for example, Eclipse hello world) and configure
> > CC to run it at all times. The stability scenario must be the only
> > scenario for CC; it must be short (no longer than an hour), test the
> > JRE under stress conditions and cover most of the functionality. If
> > the scenario fails then all newly committed updates are subject to
> > investigation and fix (or rollback).
> Actually, I prefer something without GUI
I do not think that removing GUI testing from CC and other stability
testing is a good way to go, because the awt and swing modules are
really big and complicated pieces of code.
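
(For what it's worth, parts of awt can still be exercised on a CC build
machine without AutoIT/X11GuiTest, via pure API calls in headless mode –
a minimal sketch, not one of our actual scenarios; the class name is
made up for illustration:)

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Minimal headless smoke test of the awt drawing path. It needs no
// display server, so it can run unattended on a build machine.
public class HeadlessAwtSmoke {
    public static void main(String[] args) {
        // Ask awt not to touch any windowing system.
        System.setProperty("java.awt.headless", "true");

        // Render a solid red square into an off-screen image.
        BufferedImage img =
                new BufferedImage(64, 64, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = img.createGraphics();
        g.setColor(Color.RED);
        g.fillRect(0, 0, 64, 64);
        g.dispose();

        // Verify the pixel actually came out red.
        int rgb = img.getRGB(32, 32) & 0xFFFFFF;
        if (rgb != 0xFF0000) {
            throw new AssertionError(
                    "expected red pixel, got " + Integer.toHexString(rgb));
        }
        System.out.println("headless awt smoke: OK");
    }
}
```

Something like this covers image creation, Graphics2D rendering and
pixel readback, though of course it says nothing about event handling
or the native peers.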

SY, Alexey

> or at least without using
> special 'GUI testing' tools. It should improve the quality of this
> testing (the fewer tools, the more predictable the results :)). The
> current "Eclipse hello world" scenario is based on AutoIT for Windows
> and X11GuiTest for the Linux platform. We also have a version of this
> scenario based on API calls which emulates the GUI scenario. Of these
> 2 approaches I prefer the second, to minimize 'false alarms'. Or maybe
> some other (non-GUI) scenarios?
>
>  Thanks, Vladimir
>
>
> >
> > Thought? Objections?
> >
> > Thanks,
> > Stepan.
> >
> > > I read the discussion on naming, and M1, M2, ... is fine by me.  How
> > > about we pick a proposed date for Apache Harmony M1?
> > >
> > > Regards,
> > > Tim
> > >
> >
> >
> > --
> > Stepan Mishura
> > Intel Enterprise Solutions Software Division
> >
>