hadoop-common-commits mailing list archives

From Apache Wiki <wikidi...@apache.org>
Subject [Hadoop Wiki] Update of "Release1.0Requirements" by SanjayRadia
Date Mon, 20 Oct 2008 22:13:52 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The following page has been changed by SanjayRadia:
http://wiki.apache.org/hadoop/Release1%2e0Requirements

------------------------------------------------------------------------------
  However, we can easily support backward compatibility when new methods are added. I am not sure whether this subset can be documented in the backward-compatibility policy (more correctly, I don't know how to word it); however, I will file a jira to change the RPC layer to detect and throw exceptions on missing methods. I will also follow that up with a patch for that jira. This will allow us to treat the addition of a new method as a backward-compatible change that does not require the version number to be bumped. --SanjayRadia''
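
''For illustration, a minimal sketch of the missing-method detection described above (the class and method names here are hypothetical, not Hadoop's actual ipc code): the server resolves the requested method by reflection and reports an unknown method as a clean exception the client can catch, rather than an opaque server failure.''

{{{
// Hypothetical sketch: resolve the requested method by reflection and
// turn a missing method into an IOException the client can catch,
// instead of an opaque server-side reflection error.
import java.io.IOException;
import java.lang.reflect.Method;

public class RpcDispatchSketch {
  public Object call(Object impl, String methodName,
                     Class<?>[] parameterClasses, Object[] parameters)
      throws IOException {
    try {
      Method method =
          impl.getClass().getMethod(methodName, parameterClasses);
      return method.invoke(impl, parameters);
    } catch (NoSuchMethodException e) {
      // Newer client calling an older server: name the missing method
      // so the caller can fail, or degrade, gracefully.
      throw new IOException("Unknown RPC method: " + methodName);
    } catch (Exception e) {
      throw new IOException(e.toString());
    }
  }
}
}}}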
  
- ''Sanjay, about versioning RPC parameters: On the mailing list I proposed a mechanism by which, with a small change to only the RPC mechanism itself, we could start manually versioning parameters as they are modified. Under this proposal, existing parameter implementations would not need to be altered until they next change incompatibly. It's perhaps not the best long-term solution, but it would, if we wanted, permit us to start requiring back-compatible protocols soon. --DougCutting''
- 
- ''Yes, I saw that and am evaluating it in the context of Hadoop. I have done similar things in the past, so I know that it does work. I will comment further in that email thread. Thanks for starting that thread. --SanjayRadia''
- 
- '''Doug has initiated a discussion on RPC versioning in an email thread sent to ''core-dev@hadoop.apache.org'' with the subject ''RPC versioning'' - please read and comment there.'''
  
  === Time frame for 1.0 to 2.0 ===
  What is the expectation for the life of 1.0 before it goes to 2.0? Clearly, if we switch from 1.0 to 2.0 in 3 months, the compatibility benefit of 1.0 does not deliver much value for Hadoop customers. A time frame of 12 months is probably the minimum.
@@ -91, +86 @@

  
  ''We already have manually maintained versions for protocols. Automated versions will make some things simpler (e.g., marshalling) but won't solve the harder back-compatibility problems. We could manually version data types independently of the protocols by adding a 'version' field to classes, as is done in [http://svn.apache.org/viewvc/lucene/nutch/trunk/src/java/org/apache/nutch/crawl/CrawlDatum.java?view=markup Nutch] (search for the readFields method), but that approach doesn't gracefully handle old code receiving new instances. A way to handle that is to similarly update the write() method to use a format compatible with the client's protocol version. Regardless of how we version RPC, we need to add tests against older versions. --DougCutting''
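
''A sketch of the pattern described above, with hypothetical field names: readFields() checks a leading version byte, as CrawlDatum does, and write() is additionally parameterized so new code can emit a format an older peer understands. This is illustrative, not the actual Nutch or Hadoop code.''

{{{
// Sketch of a Writable with a manual version field, in the style of
// Nutch's CrawlDatum, plus the write()-side fix Doug suggests: the
// writer can target an older version. Field names are hypothetical.
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

public class VersionedDatum implements Writable {
  private static final byte CUR_VERSION = 2;

  private long created;         // present since version 1
  private String owner = "";    // added in version 2

  // Version to emit; set this down to a peer's version when writing
  // to old code, so it sees a format it understands.
  private byte targetVersion = CUR_VERSION;

  public void readFields(DataInput in) throws IOException {
    byte version = in.readByte();
    if (version > CUR_VERSION) {
      throw new IOException("Unknown version: " + version);
    }
    created = in.readLong();
    if (version >= 2) {
      owner = in.readUTF();     // field absent in version-1 data
    }
  }

  public void write(DataOutput out) throws IOException {
    out.writeByte(targetVersion);
    out.writeLong(created);
    if (targetVersion >= 2) {
      out.writeUTF(owner);      // omitted for version-1 peers
    }
  }
}
}}}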
  
- ==== Discussion on Manual vs Automated Scheme for Versioning ====
- A manual scheme is too messy and cumbersome. An automated scheme à la Protocol Buffers, Etch, Thrift, or Hessian should be used.
+ ''A manual scheme is too messy and cumbersome. An automated scheme à la Protocol Buffers, Etch, Thrift, or Hessian should be used. --SanjayRadia''
  
- For 1.0 we should not switch Hadoop's RPC mechanism. That is a good change to target for 2.0, when the RPC landscape is clearer. We have very specific performance needs for RPC that these other mechanisms do not yet support. --DougCutting''
+ ''For 1.0 we should not switch Hadoop's RPC mechanism. That is a good change to target for 2.0, when the RPC landscape is clearer. We have very specific performance needs for RPC that these other mechanisms do not yet support. --DougCutting''
  
  
+ ''Sanjay, about versioning RPC parameters: On the mailing list I proposed a mechanism by which, with a small change to only the RPC mechanism itself, we could start manually versioning parameters as they are modified. Under this proposal, existing parameter implementations would not need to be altered until they next change incompatibly. It's perhaps not the best long-term solution, but it would, if we wanted, permit us to start requiring back-compatible protocols soon. --DougCutting''
+ 
+ ''Yes, I saw that and am evaluating it in the context of Hadoop. I have done similar things in the past, so I know that it does work. I will comment further in that email thread. Thanks for starting that thread. --SanjayRadia''
+ 
+ '''Doug has initiated a discussion on RPC versioning in an email thread sent to ''core-dev@hadoop.apache.org'' with the subject ''RPC versioning'' - please read and comment there.'''
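
''To make the parameter-versioning proposal above concrete, a hedged sketch: the SelfVersioned interface and wire layout below are invented for illustration and are not Doug's actual proposal text. The small change to the RPC layer is that a version number is written ahead of each parameter; classes that have not opted in keep an implied version 0 and need no changes until they next change incompatibly.''

{{{
// Sketch of per-parameter versioning handled inside the RPC layer.
// "SelfVersioned" is a hypothetical opt-in interface; parameters that
// do not implement it behave exactly as before (implied version 0).
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

interface SelfVersioned {
  byte getVersion();            // version this instance writes
  void acceptVersion(byte v);   // version observed when reading
}

public class ParameterCodecSketch {
  static void writeParameter(DataOutput out, Writable param)
      throws IOException {
    byte version = (param instanceof SelfVersioned)
        ? ((SelfVersioned) param).getVersion()
        : 0;                    // legacy, unversioned parameter
    out.writeByte(version);     // the one RPC-layer change: version first
    param.write(out);
  }

  static void readParameter(DataInput in, Writable param)
      throws IOException {
    byte version = in.readByte();
    if (param instanceof SelfVersioned) {
      ((SelfVersioned) param).acceptVersion(version);
    }
    param.readFields(in);       // may branch on the accepted version
  }
}
}}}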
  
  === RPC server that's forward-friendly ===
  
