harmony-dev mailing list archives

From "Vladimir Strigun" <vstri...@gmail.com>
Subject Re: [general] Harmony M2 schedule
Date Tue, 05 Jun 2007 11:16:42 GMT
On 6/5/07, Alexey Petrenko <alexey.a.petrenko@gmail.com> wrote:
> M2 - good!
> 2 weeks feature freeze - OK.

I'm OK with a release at the end of the month and a 2-week feature freeze.

> 05 Jun 2007 09:06:48 +0400, Egor Pasko <egor.pasko@gmail.com>:
> > On the 0x2EC day of Apache Harmony Mikhail Loenko wrote:
> > > Let's have the end of the month (June 30?) as the release date. Now we need to
> > > define a date for code freeze (when only critical bugs are fixed) and
> > > define how we will commit between code freeze and release (each commit
> > > approved by one more committer?)
> >
> > One should be enough. I think the common process should be well
> > applicable here: we will have committer responsibility, discussions
> > over dev@, etc. No reason for a tight commit process, IMHO.
> >
> > Tightening the commit criteria (as you are proposing) is good.
> >
> > > I think the code freeze date should depend on the longest test cycle
> > > we have (I've seen mention of ~48-hour scenarios somewhere?) and be ~2-3
> > > cycles (1 week?) prior to the release.
> > >
> > > We also need a feature freeze date (1-2 weeks prior to code freeze?) when
> > > no major changes or redesigns are allowed.
> >
> > reasonable, thanks
> >
> > 2 weeks for feature freeze before M2 should be OK, IMHO
> >
> > > And we need to set up requirements for the release. We already see a
> > > good wish-list here. The only concern I have is that its focus is
> > > almost everything: stability, performance, and completeness. Though I
> > > completely agree with each of these directions, I have a feeling that
> > > having everything in focus means not having a focus.
> > >
> > > So I propose that we go this way: we have directions, we already
> > > discussed them many times. Now let's create requirements based on the
> > > list of directions: *each person who adds something to requirements is
> > > committing to and will be responsible for meeting that requirement*
> > >
> > > The requirements could be to have something specific in stability,
> > > something specific in performance, completeness, java6, etc.
> > >
> > > Once we compose a list of requirements, say 1..N, we create keys or
> > > tags for JIRA, say M2-REQ1, ..., M2-REQN, and mark bugs affecting
> > > requirements with these keywords. So each person could easily find
> > > the bugs affecting the requirements he is responsible for.
> > 1. Why numbering? Let them be descriptive requirement names. Example:
> > M2-req-stable-linux-x86_64-regression-tests
> It depends on where these tags will be used. If it is a usual key
> for the summary field, like [luni] or [java6], then such a long key will
> not be very usable...
>
> > 2. Why req tags for JIRA? Does this help committers to follow their
> > areas of responsibility? If so, could they please speak
> > up. I thought all guys follow their bugs, have reasonable priorities
> > regarding them, etc, etc.
> Such tags will help with requirements tracking and will describe why a
> specific JIRA is marked as a blocker for M2.
>
> About M2 marking... Can we create something like "Target milestone"
> field in our JIRA with predefined values?
>
> > Requirement proposals?
> Some obvious proposals:
> 1. All the Harmony tests pass (class library + VM, excluding the
> excluded tests)
> 2. Harmony works OK with Eclipse (as M1 does)

I'd like to add the following requirements:

3. All Dacapo[1] benches work correctly with all workloads; new
features or patches committed prior to the feature freeze shouldn't
introduce degradations (see the sketch below for how the benches can be run).
4. Scimark[2] benchmark - same targets.
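
For concreteness, this is roughly how the benches can be run against a
Harmony build. The jar names, main class, and flags below are from memory
of the DaCapo 2006-10 and SciMark 2.0 distributions, and <harmony_jdk> is
just a placeholder for a local install path, so please double-check
against your own copies:

  # DaCapo: run a bench (e.g. jython) with each workload size on the Harmony JRE
  <harmony_jdk>/jre/bin/java -jar dacapo-2006-10-MR2.jar -s small jython
  <harmony_jdk>/jre/bin/java -jar dacapo-2006-10-MR2.jar -s default jython
  <harmony_jdk>/jre/bin/java -jar dacapo-2006-10-MR2.jar -s large jython

  # SciMark 2.0: default and large problem sizes
  <harmony_jdk>/jre/bin/java -cp scimark2lib.jar jnt.scimark2.commandline
  <harmony_jdk>/jre/bin/java -cp scimark2lib.jar jnt.scimark2.commandline -large

Comparing the scores with the previous weekly run should be enough to
catch degradations like the xalan one mentioned below.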

AFAIK, we have several open issues for Dacapo: in particular, the jython
bench fails on Linux (HARMONY-2137), a workaround is needed for correct
execution of the antlr bench (HARMONY-2130), and a 2x degradation was
recently identified for the xalan bench (HARMONY-4036). All information
about bench scores and failures is updated weekly on the performance page[3].


[1] http://dacapobench.org/
[2] http://math.nist.gov/scimark2/
[3] http://harmony.apache.org/performance.html

> Eclipse tests?
>
> SY, Alexey
>
