hadoop-general mailing list archives

From Arun C Murthy <...@hortonworks.com>
Subject Re: [DISCUSS] Apache Hadoop 1.0?
Date Wed, 16 Nov 2011 23:05:58 GMT

We will discard features as we go along, but we need to have consensus to discard major features.
Is that fair?

And we discard them for the reasons you outlined...


On Nov 16, 2011, at 3:02 PM, Doug Cutting wrote:

> On 11/16/2011 02:43 PM, Arun C Murthy wrote:
>> I propose we adopt the convention that a new major version should be a superset of
>> the previous major version, features-wise.
> That means that we could never discard a feature, no?
>
> One definition is that a major release includes some fundamental
> changes, e.g., new primary APIs or a re-implementation of primary
> components.  MR2 probably qualifies as both.  With a large system with
> many APIs and components this becomes a rather subjective measure, but I
> don't see an easy way around that.
> Another definition is that a major release permits incompatible changes,
> either in APIs, wire formats, on-disk formats, etc.  This is a more
> objective measure.  For example, one might in release X+1 deprecate
> features of release X but still remain compatible with them, while in
> X+2 we'd remove them.  So every major release would make incompatible
> changes, but only of things that had been deprecated two releases ago.
>
> Often the reason for the incompatible changes is new primary APIs or
> re-implementation of primary components, but those more subjective
> measures would not be the justification for the major version; rather,
> any incompatible changes would.
>
> Of course, we should work hard to never make incompatible changes...
>
> Doug
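[Editor's note: the deprecate-in-X+1, remove-in-X+2 cycle described above can be sketched in Java. This is a hypothetical illustration, not actual Hadoop code; the class and method names are invented for the example.]

```java
// Hypothetical illustration of the two-release deprecation cycle:
// release X+1 keeps the old API working but marks it deprecated,
// release X+2 may remove it. Names below are invented, not Hadoop's.
public class JobClient {

    /**
     * @deprecated As of release X+1, use {@link #submitApplication()}
     *             instead. Scheduled for removal in release X+2.
     */
    @Deprecated
    public String submitJob() {
        // Still functional in X+1 -- compatibility is preserved.
        return submitApplication();
    }

    // The replacement API introduced alongside the deprecation.
    public String submitApplication() {
        return "submitted";
    }
}
// In release X+2, submitJob() could be deleted; only callers already
// migrated to submitApplication() would keep compiling.
```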
