harmony-dev mailing list archives

From Tim Ellison <t.p.elli...@gmail.com>
Subject Re: [general] M3 milestone discussion
Date Fri, 24 Aug 2007 09:55:51 GMT
Mikhail Loenko wrote:
> 2007/8/23, Tim Ellison <t.p.ellison@gmail.com>:
>> Mikhail Loenko wrote:
>>> 2007/8/22, Tim Ellison <t.p.ellison@gmail.com>:
>>>> Mikhail Loenko wrote:
>>>>> Based on the M2 experience, I think 2 months is too short for
>>>>> Harmony: for 25% of the whole time our workspace would be more or
>>>>> less frozen. And we couldn't shorten the freeze time, since we have
>>>>> long-running suites and scenarios.
>>>> That's not my memory, looking back in the list you froze the code on 24
>>>> June, and unfroze it on 30 June.
>>> There was also a "feature freeze" message on June 14th, so it's not 10%.
>> Rather than get into a debate about the %'s, let's decide whether we
>> have the right balance between open development and ensuring
>> stability/demonstrating progress.
>>
>> I'm sure we agree that we would like to minimize the disruption to
>> on-going development, but also agree that we need these stability
>> checkpoints.  This thread is the first time I've seen a call for longer
>> open development periods.
>>
>>>> We need that length of time to run
>>>> tests and check stability as you mention, but it was more like 10%, which
>>>> I think is reasonable given our current state.
>>>>
>>>>> IMHO it negatively affects progress of the project.
>>>>> So I'm +1 for having fixed schedule, but 2-mo schedule does not leave
>>>>> enough time for normal development
>>>> Can you explain what you mean here?  I see lots of 'normal development'
>>>> taking place, with hundreds of commits in each milestone.
>>> We declare that our milestone builds are "best so far". That actually means
>>> that we should not have any (at least known) regressions.
>> Agreed.
>>
>>> We have a huge amount of tests and it's impossible to run them all
>>> before each commit. For that reason many commits introduce regressions.
>> Well hopefully not 'many commits' but it is a possibility yes<g>
>>
>>> Now the question is what percentage of time we can focus on developing new
>>> features vs. fixing regressions. Based on the CC results, it might take
>>> up to 2-3 weeks to fix the regressions introduced by a commit (some
>>> scenarios are down for even longer).
>> Is that because people are not looking at the CC results and fixing
>> them, or that we are short of machines to crunch through the scenarios?
> 
> the more machines the better. Currently the BTI scenarios run on ~30 machines,
> but it may still take up to a week to notice a regression. The reasons are:
> 
> we have many long-running scenarios
> some failures are intermittent and thus not necessarily regressions
> some failures are caused by side effects (e.g. we have tests that read/write files)
> there are failures that are not reproducible when a single test is run
> (they reproduce only when the whole suite runs)
> and more
> 
> Tim has mentioned how many commits we make, so it takes time to identify
> the guilty commit and find the reason for a regression...
> 
> Well, it does not always take that long to fix the regression, but still
> it's not a 5-minute task.

Doesn't that imply that we should check stability more often, rather than
letting the side-effects build up over a longer period of time?

>>> So that actually means that on a 2-month schedule
>>> we can do full-swing development for ~1 month, very careful development
>>> for 2 more weeks, and be mostly blocked for the 2 remaining weeks.
>> If we have introduced regressions, then fixing them in those two weeks
>> would seem like a good idea rather than continued open development.  How
>> long do you think is a reasonable time to let regressions ride?
> 
> for short-cycle tests (like classlib and drlvm-test) it should be hours.
> For long-running
> scenarios one week is "OK". But sometimes it may take more...
> 
> The problem is we have more than one developer :)

Indeed, so while I have sympathy for Mikhail F's point that his work
pace may need to adjust to the timing of a milestone, it needs to be set
in the context of everyone else's work affecting him and his affecting
everyone else.

> If there is a single person working on the code and he sees a regression,
> he might stop and fix it.
> 
> If we have two people, A and B, working on
> area1 and area2, and A has introduced a regression into area1 so that,
> for example, scenarioX now fails, then the question is: should B stop
> his work until
> area1 is fixed?
> 
> If it's "hacking time" then B should probably continue development,
> if it's "stabilizing time" then B should probably stop and wait until
> it's fixed.

Agreed, and we don't want to leave it unfixed for too long.

>>> This is what I see in the VM; the API side is definitely different: most
>>> changes are rather isolated.
>> We can certainly tweak the current practice if people feel it is
>> inhibiting the progress they could be making; I just want to ensure we
>> are not trading stability for more hacking :-)
> 
> I think we should maintain stability even while we are hacking. If a new
> feature introduces regressions, we should fix them even if it's not
> "milestone time".

Agreed.

> So IMHO we should base our decision on:
> - what ratio between "open development" and "constrained development"
> we'd like to have
> - what the mean time to repair regressions is = R
> - how long our full testing cycle is = T
> 
> then the milestone schedule would ideally be something like:
> T for the code freeze
> R + T for the feature freeze
> 
> and the period would be
> (R + 2T)/(constrained%)
> adjusted by:
> - how often we think the community wants to see stable builds
> 
> ;)
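To make the quoted proposal concrete, here is a small sketch of the schedule formula with illustrative values for R, T, and the constrained fraction; these numbers are assumptions for the example, not values agreed anywhere in this thread:

```python
# Sketch of the milestone-schedule formula from the message above.
# R, T, and CONSTRAINED_FRACTION are illustrative assumptions only.

R = 2.0  # mean time to repair regressions, in weeks (assumed)
T = 1.0  # length of one full testing cycle, in weeks (assumed)
CONSTRAINED_FRACTION = 0.25  # tolerated share of "constrained development"

code_freeze = T                              # run the full test cycle once
feature_freeze = R + T                       # repair regressions, then re-test
constrained = code_freeze + feature_freeze   # R + 2T total constrained time

# Milestone period such that constrained time is the chosen fraction of it:
period = constrained / CONSTRAINED_FRACTION

print(f"constrained time: {constrained} weeks")  # 4.0 weeks
print(f"milestone period: {period} weeks")       # 16.0 weeks
```

With these assumed numbers the formula yields a 16-week (roughly 4-month) milestone period, which illustrates why Mikhail argues a fixed 2-month schedule leaves too little open-development time unless R and T shrink.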

I agree with all that, and if somebody thinks that two months is not
enough then let them propose an alternative.  It works for me, but we
should seek consensus.

Regards,
Tim
