hadoop-general mailing list archives

From Chris Douglas <cdoug...@apache.org>
Subject Re: [VOTE] Release plan for Hadoop 2.0.5
Date Mon, 13 May 2013 15:33:14 GMT
On Sat, May 11, 2013 at 7:03 AM, Arun C Murthy <acm@hortonworks.com> wrote:
> In the ASF, the RM *does not* have the power to choose bits and pieces of code from SVN.
> He can remove bits from SVN only by vetoing the changes. [ample quotation of Roy]

A committer can create a branch, push changes to it, and invite others
to work on it. He may subsequently propose the contents of that branch
as a release, begging, convincing, etc. others in the context of a
"release manager". Whether the branching, coding, and testing is done
in an "RM context" doesn't seem particularly important to its
feasibility. But your point is well taken, and it's important to
identify this path as developers forking the branch, not an RM
curating a release. And few would disagree: it's better for developers
to collaborate than to work at cross purposes in related branches.

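The branch workflow described above can be sketched with a throwaway local repository. This is only an illustration, not the project's actual procedure: it uses git for brevity (Hadoop's canonical repository was SVN at the time), and the branch name HDFS-0000-feature is invented.

```shell
#!/bin/sh
# Sketch: a committer creates a feature branch, commits work to it,
# and can then propose that branch's contents for review or release.
set -e

dir=$(mktemp -d)          # throwaway workspace
cd "$dir"
git init -q repo && cd repo

# Stand in for trunk with an empty baseline commit.
git -c user.email=dev@example.org -c user.name=dev \
    commit -q --allow-empty -m "trunk baseline"

# Create the feature branch (name is hypothetical) and commit to it.
git checkout -q -b HDFS-0000-feature
echo "work in progress" > feature.txt
git add feature.txt
git -c user.email=dev@example.org -c user.name=dev \
    commit -q -m "Feature work isolated on its own branch"

# Others can now check out this branch, review it, and decide
# whether its contents should be merged or proposed as a release.
git branch --list
```

The point of the isolation is that nothing forces the branch into a release line: whether it ships is decided later, by the people voting on the release, not by the act of branching.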
However, the course Andrew (and others) have advocated, where recently
committed features go into a 2.1.x release series instead of 2.0.x, is
not disallowed by any rule. Whenever we've tried to assert that ASF
rules "require" us to accept or reject code, it pushes development
outside of Apache. What needs to happen at this stage is a
negotiation, not a wizards' duel on the bylaws. As Bobby pointed out
in the other thread, stabilization is an active process not a passive
one. We'll all be more successful if we can (a) tranquilize anxiety
about each new feature and (b) encourage feature authors to participate in
stabilizing the release.


Summarizing considerably, Konstantin (and others) are anxious to see
2.x stabilize and to ensure trunk stays releasable. One strategy he
proposed: stage large, complex features in more frequent releases. To
come up with alternative proposals that achieve these aims, others
need more context. For all that's been written about software
development generally, there are few details on what, exactly, is
harming stability.

So: for a given feature, are there holes in the test plan? Do you have
benchmarks you'd like to run? What difficult edge cases might have
been missed? If you're not comfortable with the design or
implementation, what action would reassure you that it's ready for a
beta release? If you're coding a downstream project, are the unit
tests covering behaviors you rely on? Are the JIRAs defining
compatibility covering your use of those APIs?

If you're worried about stability of new features: it's legal to fork,
but there are many reasons why that time is better spent helping
others to harden the beta release.

On Fri, May 10, 2013 at 5:48 PM, Andrew Purtell <apurtell@apache.org> wrote:
> It would seem to a humble outsider that project formalism and procedure is
> not the issue, but rather expectations and impact on the outside
> world. We hear that branch-2 is approaching stability, except when it
> isn't, as evidenced by new downstream project unit test failures at each
> "minor" release, with major new features going in because it's too
> difficult to renumber. (??)

Respectfully, this complaint is against software explicitly tagged as
"alpha"; it's in the version number. Did you file JIRAs and
participate in reviewing the protocols/APIs that changed? Did you
raise your use cases with developers, so they know someone was coding
against that behavior?

The 2.x branch contains years of work from its contributors, who know
how fortunate they are to have downstream projects eager to use their work.
But let's not indulge in the fantasy that all our frustrations with
software development are due to others' negligence. If you follow JIRA
traffic, the work on ensuring that branch-2 protocols remain stable
has been the source of most compatibility issues between patch
versions. And several recent discussions have covered how Bigtop can
help Hadoop development know when it breaks downstream projects. -C
