hadoop-general mailing list archives

From sanjay Radia <san...@hortonworks.com>
Subject Re: [DISCUSS] Apache Hadoop 1.0?
Date Thu, 17 Nov 2011 01:11:03 GMT

On Nov 16, 2011, at 3:02 PM, Doug Cutting wrote:
> 
> 
> Another definition is that a major release permits incompatible changes,
> either in APIs, wire formats, on-disk formats, etc.  This is a more
> objective measure.  For example, one might in release X+1 deprecate
> features of release X but still remain compatible with them, while in
> X+2 we'd remove them.  So every major release would make incompatible
> changes, but only of things that had been deprecated two releases ago.
> Often the reason for the incompatible changes is new primary APIs or
> re-implementation of primary components, but those more subjective
> measures would not be the justification for the major version, rather
> any incompatible changes would.
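The deprecate-in-X, remove-in-X+2 cycle described above can be sketched roughly as follows (a hypothetical illustration in Python for brevity; the names are made up and not part of Hadoop):

```python
import warnings

# Release X deprecates a feature but keeps it working and compatible;
# release X+2 may then remove it.
def old_api():                      # deprecated in release X
    warnings.warn("old_api is deprecated; use new_api",
                  DeprecationWarning, stacklevel=2)
    return new_api()                # still compatible: delegates

def new_api():
    return "result"

# Callers of the deprecated method keep working until removal:
assert old_api() == new_api() == "result"
```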

This is mostly consistent with what is stated with respect to API changes in HADOOP-5071 on "Hadoop
Compatibility requirements": https://issues.apache.org/jira/browse/HADOOP-5071.

HADOOP-5071 was derived from a long series of email discussions and describes some of the
subtle nuances of compatibility for APIs, on-disk formats, wire protocols, etc.
Some notes (see details there):
1) a break in compatibility => a major number change,
  but a major number change does NOT => a break in compatibility.
2) we routinely change the on-disk format in HDFS but perform an automatic upgrade. That is okay and
allowed without a major number change.
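The one-way implication in note 1 can be sketched as a simple policy check (a hypothetical helper, not anything from the Hadoop codebase):

```python
def release_is_valid(broke_compatibility: bool,
                     old_major: int, new_major: int) -> bool:
    """A compatibility break requires a major-version bump,
    but a major-version bump does not require a break."""
    if broke_compatibility:
        return new_major > old_major  # break => major number change
    return True  # compatible releases may or may not bump the major

# A break without a major bump violates the policy:
assert not release_is_valid(True, 1, 1)
# A major bump without any break is allowed:
assert release_is_valid(False, 1, 2)
# A compatible release that keeps the major number is also fine:
assert release_is_valid(False, 1, 1)
```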
> 
> Of course, we should work hard to never make incompatible changes...

Agreed.  Once things are in customers' hands it is very hard to remove even deprecated methods
(but once in a while we have to do it, after allowing sufficient time to upgrade to new APIs).
 Java for example does not remove deprecated methods.

sanjay

